Search Results

Search found 3983 results on 160 pages for 'partial trust'.


  • Rich text editor for JSF 2

    - by Pradyumna
    Hi, I searched around for a basic WYSIWYG rich text editor that I can use in a JSF 2 (VDL) application, but found nothing satisfactory, in the sense that:
    - the editor is very extensive and not configurable (like PrettyFaces)
    - the editor doesn't work with VDL (like RichFaces)
    - multiple instances of the editor cannot be used on the same page (like Tomahawk t:htmlArea)
    I actually don't need all the fancy things like fonts, indenting/justification, or undo/redo; just bold, italic, lists and hyperlinks would suffice. Do you know of something that works well in this scenario, produces XHTML-compliant markup, and works well with partial page refreshes (f:ajax), or would you recommend that I write my own? Thank you! Pradyumna


  • Java webapp: adding a content-disposition header to force browsers "save as" behavior

    - by WizardOfOdds
    Even though it's not part of HTTP 1.1/RFC 2616, webapps that wish to force a resource to be downloaded (rather than displayed) in a browser can use the Content-Disposition header like this:
        Content-Disposition: attachment; filename=FILENAME
    Even though it's only defined in RFC 2183 and not part of HTTP 1.1, it works in most web browsers as wanted. So from the client side, everything is good enough. However, on the server side, in my case, I've got a Java webapp and I don't know how I'm supposed to set that header, especially in the following case... I'll have a file (say called "bigfile") hosted on an Amazon S3 instance (my S3 bucket shall be accessible using a partial address like files.mycompany.com/), so users will be able to access this file at files.mycompany.com/bigfile. Now is there a way to craft a servlet (or a .jsp) so that the Content-Disposition header is always added when the user wants to download that file? What would the code look like and what are the gotchas, if any?
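
    Not part of the question - a minimal sketch of one way such a servlet could look, assuming it simply proxies the S3-backed address and adds the header itself. The class name, URL and filename are placeholders, and one obvious gotcha is that the servlet then carries the download bandwidth, so this only makes sense for modest files.

        // Hedged sketch, not the poster's code: proxy the object and force the download.
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.URL;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ForceDownloadServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
                String fileName = "bigfile";                                      // placeholder object name
                URL source = new URL("http://files.mycompany.com/" + fileName);  // assumed S3-backed address
                resp.setContentType("application/octet-stream");
                resp.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
                try (InputStream in = source.openStream(); OutputStream out = resp.getOutputStream()) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);   // stream straight through; nothing buffered in memory
                    }
                }
            }
        }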


  • Problem creating an ObjectContext from a different project inside the solution

    - by Levelbit
    I have two projects in my solution. One implements my business logic and has the Entity Framework entity model defined in it. When I want to work with classes defined within this project from another project, I have some problems at runtime. Actually, the most concerning thing is: why can I not instantiate my so-called TicketEntities (ObjectContext) object from other projects? I catch the following exception:
        The specified named connection is either not found in the configuration, not intended to be used with the EntityClient provider, or not valid.
    I found out it breaks at:
        public partial class TicketEntities : global::System.Data.Objects.ObjectContext
        {
            public TicketEntities() : base("name=TicketEntities", "TicketEntities")
            {
                this.OnContextCreated();
            }
        }
    with the exception:
        Unable to load the specified metadata resource.
    Just to remind you, everything works fine from the original project.
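
    Not from the question - both messages often point at the config file of the project that actually runs, so below is a hedged sketch of the entity connection string that usually has to be copied from the model project's app.config into the startup project's config. "TicketModel" and the SQL connection details are placeholders; the real entry should be copied rather than retyped.

        <!-- Hedged sketch, assuming the model file is TicketModel.edmx -->
        <connectionStrings>
          <add name="TicketEntities"
               connectionString="metadata=res://*/TicketModel.csdl|res://*/TicketModel.ssdl|res://*/TicketModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=Tickets;Integrated Security=True&quot;"
               providerName="System.Data.EntityClient" />
        </connectionStrings>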


  • How to set HTTP Headers from client class inherited from SoapHttpClientProtocol

    - by Alfred
    I'm using a class MyClass inherited from SoapHttpClientProtocol (auto-generated in my project by creating a Web Reference from a .wsdl file, representing a service). Before calling a "WebMethod" of this service, I need to customize the HTTP header of my request. I tried overriding the GetWebRequest() method of SoapHttpClientProtocol this way:
        public partial class MyClass : System.Web.Services.Protocols.SoapHttpClientProtocol
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
                request.Headers.Add("MyCustomHeader", "MyCustomHeaderValue");
                return request;
            }
        }
    I was hoping that GetWebRequest was called in the constructor of MyClass; apparently it's not. Could someone help me?
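
    Not from the question, just a hedged note on the timing: GetWebRequest runs when a web method is invoked, not at construction, so the override above should still take effect on every outgoing call. "HelloWorld" below stands in for whatever operation the generated proxy actually exposes.

        // Hedged usage sketch; HelloWorld is a placeholder for the proxy's real web method.
        MyClass client = new MyClass();
        // No request exists yet, so GetWebRequest has not run at this point.
        client.HelloWorld();   // Invoke -> GetWebRequest -> the custom header is added here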


  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got in to Server 2008 R2 via Dreamspark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102-degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything. What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtual ClearOS gateway, and to put all its own Internet traffic through its other NIC - which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install as tucked behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks; however, I don't trust Windows Server because I don't know the OS well enough, yet. So, three questions:
    1. How do I do this?
    2. Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option)?
    3. Should I virtualize the Server 2008 R2 rig instead, and if so in what? (Core 2 Duo E6600 w/ 5 GB usable RAM)


  • MVC2 Client-Side Validation for injected Ajax content

    - by radu-negrila
    Hi, I am making an Ajax call and adding content to a form inside an MVC2 app. I need to update the client validation metadata with the validation for my new content.
        <script type="text/javascript">
        //<![CDATA[
        if (!window.mvcClientValidationMetadata) { window.mvcClientValidationMetadata = []; }
        window.mvcClientValidationMetadata.push({"Fields":[{" ...
        </script>
    Is there a way to generate this metadata for a partial view? Thanks in advance.


  • DNS Server on Fedora 11

    - by Funky Si
    I recently upgraded my Fedora 10 server to Fedora 11 and am getting the following error from my DNS/named service:
        named[27685]: not insecure resolving 'fedoraproject.org/A/IN: 212.104.130.65#53
    This only shows for certain addresses; some are resolved fine and I can ping and browse to them, while others produce the error above. This is my named.conf file:
        acl trusted-servers { 192.168.1.10; };
        options {
            directory "/var/named";
            forwarders { 212.104.130.9; 212.104.130.65; };
            forward only;
            allow-transfer { 127.0.0.1; };
            # dnssec-enable yes;
            # dnssec-validation yes;
            # dnssec-lookaside . trust-anchor dlv.isc.org.;
        };
        # Forward zone for hughes.lan domain
        zone "funkygoth" IN {
            type master;
            file "funkygoth.zone";
            allow-transfer { trusted-servers; };
        };
        # Reverse zone for hughes.lan domain
        zone "1.168.192.in-addr.arpa" IN {
            type master;
            file "1.168.192.zone";
        };
        include "/etc/named.dnssec.keys";
        include "/etc/pki/dnssec-keys/dlv/dlv.isc.org.conf";
        include "/etc/pki/dnssec-keys//named.dnssec.keys";
        include "/etc/pki/dnssec-keys//dlv/dlv.isc.org.conf";
    Anyone know what I have set wrong here?


  • Isolate SQL field using regex

    - by Das123
    I'm trying to isolate a specific field in a SQL dump file so I can edit it, but I'm not having any luck. The regex I'm using is:
        ^(?:(?:'[^\r\n']*'|[^,\r\n]*),){6}('[^\r\n']*'|[^,\r\n]*)
    which is supposed to grab the seventh field and place it inside reference 1. The trouble is that this stumbles whenever it finds a comma inside a text field and counts the partial match among the allowable matches. E.g.
        (1, 'Title', 1, 3, '2006-09-29', 'Commas, the bane of my regex', 'This is the target', 2, 4)
    matches " the bane of my regex'" instead of "'This is the target'".
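
    A hedged alternative that isn't in the question: when a regex keeps tripping over commas inside the quoted text, a CSV reader configured with the dump's quote character can do the field counting instead. The sketch below assumes the surrounding parentheses have already been stripped from the row.

        # Hedged Python sketch, assuming the row's outer parentheses are stripped first.
        import csv

        row = "1, 'Title', 1, 3, '2006-09-29', 'Commas, the bane of my regex', 'This is the target', 2, 4"
        fields = next(csv.reader([row], quotechar="'", skipinitialspace=True))
        print(fields[6])   # -> This is the target (the seventh field, commas and all)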


  • Introducing Ajax support in a MyFaces (JSF) + Tomahawk application

    - by Abel Morelos
    Hi, we have a project where we are using MyFaces + Tomahawk. Recently I have been asked to provide enhancements to many of the existing screens by using AJAX, and to provide functionality such as partial refresh. As I see it, Tomahawk's components don't have special support for Ajax, so it may be a lot of work to hack Tomahawk in order to use Ajax. Now, I have seen that there are other frameworks such as Trinidad, Ajax4jsf, RichFaces, etc. I'm especially interested in Trinidad since it is also a MyFaces project and it has built-in Ajax support, but I'm still not convinced about Trinidad since the other frameworks also have very promising features. Considering that I have a MyFaces + Tomahawk application, what move would you suggest in order to introduce Ajax support? Hack on Tomahawk or directly on JSF/MyFaces? Use Trinidad? Use/add a different framework? Thanks.


  • WS2008 NTP - Using time.windows.com,0x9 - Time always skewed forwards

    - by David
    I have a domain controller configured to use time.windows.com (with the 0x09 flags set). I've noticed that frequently the system's clock is fast - it varies from 10 minutes to even 45 minutes. I always have to keep resetting the system date/time back to what it should be. When I run "w32tm /query /source" it tells me it's using time.windows.com, and obviously I trust Microsoft not to serve incorrect times, so why is my server's clock fast?
    EDIT: There are a few Time-Service events in the System log:
        Event ID: 142
        Message: The time service has stopped advertising as a time source because the local clock is not synchronized.
        Event ID: 139
        Message: The time service has started advertising as a time source.
    These two messages appear in pairs every hour or so; event 142 appears 14 to 16 minutes after 139. Going back a few months, these events appear:
        Event ID: 35
        Message: The time service is now synchronizing the system time with the time source time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).
        Event ID: 37
        Message: The time provider NtpClient is currently receiving valid time data from time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123).
        Event ID: 47
        Message: Time Provider NtpClient: No valid response has been received from manually configured peer time.windows.com,0x9 after 8 attempts to contact it. This peer will be discarded as a time source and NtpClient will attempt to discover a new peer with this DNS name. The error was: The time sample was rejected because: The peer is not synchronized, or it has been too long since the peer's last synchronization.
    These three events only appear once in the log, back in October.
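
    Not part of the question - a few stock w32tm switches that are commonly used to see what the Windows Time client is actually doing and to force a fresh sync. They are standard Windows Time service options, not anything specific to this server.

        :: Hedged diagnostic sketch (run in an elevated prompt on the domain controller)
        w32tm /query /status
        w32tm /query /configuration
        w32tm /stripchart /computer:time.windows.com /samples:5 /dataonly
        :: Re-point the peer list and force a resync if the configuration looks wrong
        w32tm /config /manualpeerlist:"time.windows.com,0x9" /syncfromflags:manual /update
        w32tm /resync /rediscover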


  • Why is my RapidSSL certificate chain not trusted on Ubuntu?

    - by olouv
    I have a website that works perfectly with Chrome and other browsers, but I get some errors with PHP in CLI mode, so I'm investigating it by running this:
        openssl s_client -showcerts -verify 32 -connect dev.carlipa-online.com:443
    Quite surprisingly, my HTTPS appears untrusted, with "Verify return code: 27 (certificate not trusted)". Here is the raw output:
        verify depth is 32
        CONNECTED(00000003)
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify error:num=27:certificate not trusted
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1
    So GeoTrust Global CA appears to be not trusted on the system (Ubuntu 11.10). I added Equifax_Secure_CA to try to solve this, but in that case I get "Verify return code: 19 (self signed certificate in certificate chain)"! Raw output:
        verify depth is 32
        CONNECTED(00000003)
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify error:num=19:self signed certificate in certificate chain
        verify return:1
        depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority
        verify return:1
        depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
        verify return:1
        depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
        verify return:1
        depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com
        verify return:1
    Edit: it looks like my server does not trust/provide the Equifax root CA; however, I do have the file in /usr/share/ca-certificates/mozilla/Equifax...
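
    Not in the question, just a hedged check: by default openssl s_client does not load the system CA bundle, so pointing it at Ubuntu's bundle separates "my trust store is broken" from "the tool just wasn't told where to look". The path below is the stock Ubuntu location and an assumption about this machine.

        # Hedged sketch: tell s_client explicitly where Ubuntu keeps its CA bundle
        openssl s_client -connect dev.carlipa-online.com:443 -CAfile /etc/ssl/certs/ca-certificates.crt
        # Or verify the saved server certificate on its own against the same bundle
        openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt server.pem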


  • Global "ajax call" notification with asp.net mvc/jquery

    - by Joel Martinez
    I need to be notified any time a largeish ASP.NET MVC web application makes an ajax call to the server. We're using both jQuery and the built-in Ajax.* methods to do the remote calls, and I would like a global way of knowing when we make a call, without having to manually inject some sort of "IsMakingCall" method into every request. The root problem we're trying to solve is session timeout. If the user leaves a page up and goes to lunch (for example), they get errors when they get back, because the ajax call is returning the login page instead of the expected JSON or partial HTML result. My idea was to have a JS timer which would be reset any time we make an ajax call. That way, if the timer runs out (i.e. their session has now timed out) I can just auto-log them out. This is how sites like Bank of America and mint.com work. Thanks!
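
    A hedged sketch of that timer, not from the question: jQuery exposes global ajax events, and the ASP.NET AJAX stack behind the Ajax.* helpers exposes WebRequestManager, so both call paths can reset the same countdown. The 20-minute timeout and the logout URL are placeholders.

        // Hedged sketch; the timeout value and /Account/LogOff URL are assumptions.
        var sessionTimer;

        function resetSessionTimer() {
            clearTimeout(sessionTimer);
            sessionTimer = setTimeout(function () {
                window.location = '/Account/LogOff';      // auto-log the user out
            }, 20 * 60 * 1000);
        }

        $(document).ajaxSend(resetSessionTimer);                          // every jQuery call
        Sys.Net.WebRequestManager.add_invokingRequest(resetSessionTimer); // every Ajax.* helper call
        resetSessionTimer();                                              // start counting on page load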


  • Edit dialog, with bindings and OK/Cancel in WPF

    - by Erik
    How can I have a dialog for editing the properties of a class with binding, and have OK/Cancel in the dialog? My first idea was this:
        public partial class EditServerDialog : Window
        {
            private NewsServer _newsServer;

            public EditServerDialog(NewsServer newsServer)
            {
                InitializeComponent();
                this.DataContext = (_newsServer = newsServer).Clone();
            }

            private void ButtonClick(object sender, RoutedEventArgs e)
            {
                switch (((Button)e.OriginalSource).Content.ToString())
                {
                    case "OK":
                        _newsServer = (NewsServer)this.DataContext;
                        this.Close();
                        break;
                    case "Cancel":
                        this.Close();
                        break;
                }
            }
        }
    When in the switch, case "OK", the DataContext contains the correct information, but the originally passed NewsServer instance does not change.
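
    Not part of the question - a hedged variation on the OK branch: reassigning the _newsServer field only changes the dialog's private reference, so the edits have to be copied back into the instance the caller passed in. CopyTo here is a hypothetical member-by-member copy on NewsServer, not an existing API.

        // Hedged sketch; CopyTo is a hypothetical helper that copies property values across.
        private void ButtonClick(object sender, RoutedEventArgs e)
        {
            switch (((Button)e.OriginalSource).Content.ToString())
            {
                case "OK":
                    ((NewsServer)this.DataContext).CopyTo(_newsServer); // push the clone's edits back
                    this.DialogResult = true;                           // closes a window shown via ShowDialog()
                    break;
                case "Cancel":
                    this.DialogResult = false;                          // original instance left untouched
                    break;
            }
        }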


  • Binding custom property in Entity Framework

    - by deverop
    I have an Employee entity in my EF model. I then added a class to the project to add a custom property:
        public partial class Employee
        {
            public string Name
            {
                get { return string.Format("{0} {1}", this.FirstName, this.LastName); }
            }
        }
    On an aspx form (inside a FormView), I want to bind a DropDownList to the employee collection:
        <asp:Label runat="server" AssociatedControlID="ddlManagerId" Text="ManagerId" />
        <asp:DropDownList ID="ddlManagerId" runat="server" DataSourceID="edsManagerId"
            DataValueField="Id" DataTextField="Name" AppendDataBoundItems="true"
            SelectedValue='<%# Bind("ManagerId") %>'>
            <asp:ListItem Text="-- Select --" Value="0" />
        </asp:DropDownList>
        <asp:EntityDataSource ID="edsManagerId" runat="server" ConnectionString="name=Entities"
            DefaultContainerName="Entities" EntitySetName="Employees" EntityTypeFilter="Employee"
            EnableFlattening="true">
        </asp:EntityDataSource>
    Unfortunately, when I fire up the page, I get an error:
        DataBinding: 'System.Web.UI.WebControls.EntityDataSourceWrapper' does not contain a property with the name 'Name'.
    Any ideas what I'm doing wrong?


  • Does the DataAnnotations.DisplayAttribute.Order property not work with ASP.NET MVC 2?

    - by Zack Peterson
    I set values for the Order property of the Display attribute in my model metadata:
        [MetadataType(typeof(OccasionMetadata))]
        public partial class Occasion
        {
            private class OccasionMetadata
            {
                [ScaffoldColumn(false)]
                public object Id { get; set; }

                [Required]
                [DisplayName("Title")]
                [Display(Order = 0)]
                public object Designation { get; set; }

                [Required]
                [DataType(DataType.MultilineText)]
                [Display(Order = 3)]
                public object Summary { get; set; }

                [Required]
                [DataType(DataType.DateTime)]
                [Display(Order = 1)]
                public object Start { get; set; }

                [Required]
                [DataType(DataType.DateTime)]
                [Display(Order = 2)]
                public object Finish { get; set; }
            }
        }
    I present my models in strongly-typed views using the DisplayForModel and EditorForModel methods:
        <%= Html.DisplayForModel() %>
    and
        <%= Html.EditorForModel() %>
    But, ASP.NET MVC 2 displays the fields out of order! What might I have wrong?


  • multiple folder structures/views

    - by Sojourner
    Newbie. Setting up a server for a law firm. I want to set up the folder structure as follows:
        Client 1 Name
        -- Matter 1 (i.e. setting up corporation)
        -- Matter 2 (i.e. divorce)
        -- Matter 3 (i.e. setting up trust)
        Client 2 Name
        -- Matter 1
        Client 3 Name
        -- Matter 1
    and so on. But the attorneys prefer navigating a folder structure based more on the case type (see the sketch after this entry):
        Civil
        -- Client 1 Name (i.e. Smythe)
        -- Client 2 Name (i.e. Jones)
        -- Client 3 Name (i.e. Johson)
        -- Landlord/Tenant
        -- Client 1 Name (i.e. Jones)
        -- Client 2 Name (i.e. Johson)
        -- Class Action Suits
        -- Suit 1
        -- Suit 2
        Personal Injury
        -- Client 1 Name
        -- Client 2 Name
        -- Client 3 Name
        Criminal
        -- Client 1 Name (i.e. Smythe)
    I'd like to know if it's possible to set up the server with the first folder structure (it's more organized and easier to employ scripts), while having the second folder structure available for users who find it easier to deal with the same types of cases grouped together.
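
    Not from the question - one hedged way to get both views on a Windows file server is to keep the per-client tree as the real data and expose the by-case-type tree as directory junctions pointing into it. The question doesn't state the OS, and the drive letters and folder names below are illustrative only.

        :: Hedged sketch (Windows; run once per matter, paths are made up)
        mklink /J "D:\ByCaseType\Civil\Smythe - Matter 2 (Divorce)" "D:\Clients\Smythe\Matter 2"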


  • Cannot connect on TFS 2012 server through SSL with invalid certificate

    - by DaveWut
    I saw this problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available through the Internet with the help of apache2 on a different, UNIX-based physical server. The thing works perfectly; I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged and I do have access to the TFS control panel through it. How does it work? Well, with apache2 you can set up a virtual host with the ProxyPass and ProxyPassReverse settings, so the traffic can come in externally over a secure SSL connection through a specified domain or sub-domain, but be redirected locally to a clear http session on a different port. This is my current setup. As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png The technical information tells me:
        The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
    My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I try to add it to the Trusted Root Certification Authorities, either on my TFS server or on my local workstation, it doesn't work; I still receive the same error. Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates; it should work, as mentioned on some forums... If you need more information, please ask me; I'll be pleased to provide more. Dave


  • How do I tell if my drivers are up to date on Acer?

    - by joe
    Hoping some kind souls can help me out. I got a blue screen the other day after trying to load Sandboxie, so it's obviously conflicting with something. I checked if my drivers were up to date on my Acer Aspire One AOD270 on this Intel-based site: http://www.drivermanager.com/en/down...tel&Logo=intel It's showing I have 2 drivers that need updating: the Intel NM10 Express chipset and the Realtek PCIE card reader. I have no idea whether to do the update via the Intel driver update site or the Acer drivers download page. I then ran BlueScreenView, and on the dump file it's showing:
        ''caused by driver'' igdkmd32.sys
        ''file description'' Intel (R) WDDM Kernel mode driver
        ''product name'' Intel Graphics Accelerator Drivers for Windows 7 (R)
    I bought the laptop here in SE Asia about a year ago. The ''HOT!! NEW download tool'' on the Acer drivers site (below) doesn't seem to work, and the info about removing and installing drivers is limited. Not sure what to trust on non-Acer/manufacturer sites. http://support.acer.com/us/en/produc...1&modelId=4040 I've located the igdkmd32.sys file inside the INTEL GRAPHICS MEDIA ACCELERATOR 3600 SERIES 8.14.8.1064. When I click on ''update driver'' in Control Panel it searches and says it's up to date. In Windows maintenance it says this Intel driver had a problem, but no solution. For all I know my drivers could be up to date and it's something else. Can anybody advise, step by step, the process I should follow? I've never done this before, e.g. do I delete the old driver first and then download the new one? How much of a problem could I cause by downloading this type of thing wrongly? As yet I haven't downloaded any drivers. I've asked on other forums but no luck as yet. Thanks for any help!


  • ssh keys rejected each day

    - by EddyR
    I've had an OpenSSH server running on my Debian server for a couple of weeks, and all of a sudden, when I go to log in the next day, it rejects my ssh key and I have to manually add a new one each time. Not only that, but I have the "tunneling with clear-text passwords" option enabled and the non-root account for that is rejected too (login with root is disabled). I'm at a loss why this is happening and I can't find any ssh options that would explain it.
    --update--
    I just changed the debug level to DEBUG. But before that, I was seeing a lot of the following in auth.log:
        Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session opened for user root by (uid=0)
        Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session closed for user root
        ...
        Feb 1 04:36:26 greenpages sshd[7217]: reverse mapping checking getaddrinfo for nat-pool-xx-xx-xx-xx.myinternet.net [xx.xx.xx.xx] failed - POSSIBLE BREAK-IN ATTEMPT!
        ...
        Feb 1 04:37:31 greenpages sshd[7223]: Did not receive identification string from xx.xx.xx.xx
        ...
    My sshd_config file settings are:
        # Package generated configuration file
        # See the sshd(8) manpage for details

        # What ports, IPs and protocols we listen for
        Port xxx
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        # Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        LogLevel DEBUG
        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        PasswordAuthentication yes
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM no
        ClientAliveInterval 60
        AllowUsers myuser


  • Design pattern for parsing data that will be grouped in two different ways and flipped

    - by lewisblackfan
    I'm looking for an easily maintainable and extendable design model for a script to parse an Excel workbook into two separate workbooks, after pulling data from other locations like the command line and a database. The high-level details are as follows.
    I need to parse an Excel workbook containing a sheet that lists unique question names. The only reliable information that can be parsed from a question name is the book code, which identifies the title and edition of the textbook the question is associated with; the rest of the question name is not standardized well enough to be reliably parsed by computer. The general form of the question name is best described by the following regular expression:
        ^(\w+)\s(\w{1,2})\.(\w{1,2})\.(\w{1,3})\.(\w{1,3}\.)*$
    The first sub-pattern is the book code, the second sub-pattern is 90% of the time the chapter, and the rest of the sub-patterns could be section, problem type, problem number, or question type information. There is no simple logic, at least not one I can find.
    There will be a minimum of three other columns in this spreadsheet: one column will be the chapter the question is associated with, the second will be the section within the chapter the question is associated with, and the third will be some kind of asset indicated by a uniform resource locator.
        1 | 1 | qname1 | url | description | url | description ...
        1 | 1 | qname2 | url | description
        1 | 1 | qname3 | url | description | url | description | url |
    The asset can be indicated by a full or partial uniform resource locator; the partial URL will need to be completed before it can be fed into the application. There theoretically could be no limit to the number of asset columns; the assets will be grouped in columns by type. Sometimes additional data will have to be retrieved from a database, or combined with the book code, before the asset URL is complete and can be understood by the application that will be using the asset. The type is an abstraction: there are eight types right now, each with its own logic for how the uniform resource locator is handled and/or completed, and I have to add a new type and its logic every three or four months. For each asset URL there is the possibility of a description column, a character string for display in the application, but not always. (I've already worked out validating the description text, and squashing MS's obscure code page down to something 7-bit ASCII can handle.)
    Now that all the details are filled in, I can get to the actual problem of parsing the file. I need to split the information in this Excel workbook into two separate workbooks. The first workbook will group all the questions by section, in rows, with the first cell being the section doublet and the rest of the cells in the row being the question names.
        1.1 | qname1 | qname2 | qname3 | qname4 |
        1.2 | qname1 | qname2 | qname3 |
        1.3 | qname1 | qname2 | qname3 | qname4 | qname5
    There is no set number of questions for each section, as you can see from the above example. The second workbook is more complicated: there is one row per asset, and question names that have more than one asset will be duplicated. There will be four or five columns on this sheet. The first is the question name for the asset, the second is a media type used to select the correct icon for the asset in the application, the third is a string representing the asset type, the fourth is the full and complete uniform resource locator for the asset, and the fifth column is the optional text description for the asset.
        q1 | mtype1 | atype1 | url | description
        q1 | mtype2 | atype2 | url | description
        q1 | mtype2 | atype3 | url | description
        q2 | mtype1 | atype1 | url | description
        q2 | mtype2 | atype3 | url | description
    For the original six types I did have a script that parsed the source Excel workbook into the other two workbooks, and I was able to add two more types, until I ran aground on the implementation of the ninth and tenth types. What broke my script was the fact that the ninth type is actually a sub-type of one of the original six, but with entirely different logic, and my mostly procedural script could not accommodate that without duplicating a lot of code. I also had a lot of bugs in the script and will be writing the tests first this time around.
    I'm stuck with the format for the resulting two workbooks; this script is glue code, and development went ahead with the project without bothering to get a complete spec from the sponsor. I work for the same company as the developers, but in the editorial department; editorial is co-sponsor of the project, and I am expected to fix pesky details like this (I'm foaming at the mouth as I type this). I've tried factories, I've tried different object models, but each resulting workbook is so different that when I find a design that works for generating one workbook, the code is not really usable for generating the other. What I would really like are ideas about a maintainable and extensible design for parsing the source workbook into both workbooks with maximum code reuse, and/or sympathy.
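
    Not from the question - a hedged Python sketch of the direction the "one class per asset type" idea usually takes: each type registers its own URL-completion logic, so the troublesome sub-type with different rules becomes one more class instead of a fork inside the parser. All type names, URL patterns and helper names here are made up for illustration.

        # Hedged sketch; asset type names, URL patterns and handler names are illustrative.
        ASSET_HANDLERS = {}

        def handles(asset_type):
            """Class decorator: register one handler instance per asset type string."""
            def register(cls):
                ASSET_HANDLERS[asset_type] = cls()
                return cls
            return register

        class AssetHandler(object):
            def complete_url(self, partial_url, book_code):
                raise NotImplementedError

        @handles("image")
        class ImageHandler(AssetHandler):
            def complete_url(self, partial_url, book_code):
                return "http://assets.example.com/%s/%s" % (book_code, partial_url)

        @handles("image-zoomable")   # the awkward sub-type gets its own class, not an if-branch
        class ZoomableImageHandler(AssetHandler):
            def complete_url(self, partial_url, book_code):
                if partial_url.startswith("http"):
                    return partial_url
                return "http://zoom.example.com/%s?book=%s" % (partial_url, book_code)

        def complete(asset_type, partial_url, book_code):
            return ASSET_HANDLERS[asset_type].complete_url(partial_url, book_code)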


  • New virtualization project and old SAN

    - by Chris
    Hi, we'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live motion and other HA features. We have an IBM DS4300 SAN that we can use for that, but since it has been in production since 2005, I'm not sure about giving such a critical role to a 5-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement part number into Google (for example IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price on various sites. So am I stupid to buy from IBM, or naive to trust a 3rd-party reseller? Thanks, Chris


  • How to parse malformed HTML in python, using standard libraries

    - by bukzor
    There are so many HTML and XML libraries built into Python that it's hard to believe there's no support for real-world HTML parsing. I've found plenty of great third-party libraries for this task, but this question is about the Python standard library.
    Requirements:
    - Use only Python standard library components (I'm currently using v2.6)
    - DOM support
    - Handle HTML entities (&nbsp;)
    - Handle partial documents (like: Hello, <i>World</i!)
    Bonus points:
    - XPATH support
    - Handle unclosed/malformed tags (<big>does anyone here know <html>???)
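
    Not part of the question - a hedged sketch of how far the 2.6 standard library actually stretches: HTMLParser will emit events for entities and for partial documents with unclosed tags, but it hands you an event stream rather than a DOM, which is roughly where the frustration above comes from.

        # Hedged sketch, Python 2.6 stdlib only; it yields events, not a DOM.
        from HTMLParser import HTMLParser

        class TextExtractor(HTMLParser):
            def __init__(self):
                HTMLParser.__init__(self)
                self.pieces = []
            def handle_data(self, data):
                self.pieces.append(data)
            def handle_entityref(self, name):              # e.g. &nbsp;
                self.pieces.append({'nbsp': ' ', 'amp': '&'}.get(name, ''))

        p = TextExtractor()
        p.feed('Hello,&nbsp;<i>World')                     # partial document, unclosed <i>
        print ''.join(p.pieces)                            # -> Hello, World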


  • Dynamicdata Validation Exception Message Caught in JavaScript, not DynamicValidator

    - by Perplexed
    I have a page here with a few ListViews on it that are all bound to LINQ data sources, and they seem to be working just fine. I want to add validation such that when a checkbox (IsVoid on the object) is checked, comments must be entered (VoidedComments on the object). Here's the bound object's OnValidate method:
        partial void OnValidate(ChangeAction action)
        {
            if (action == ChangeAction.Update)
            {
                if (_IsVoid)
                {
                    string comments = this.VoidedComments;
                    if (string.IsNullOrEmpty(this._VoidedComments))
                    {
                        throw new ValidationException("Voided Comments are Required to Void an Error");
                    }
                }
            }
        }
    Despite there being a DynamicValidator on the page referencing the same ValidationGroup as the dynamic control, when the exception fires it's caught in JavaScript and the debugger wants to break in. The message is never delivered to the UI as expected. Any thoughts as to what's going on?


  • COM port cannot be opened in asp.net

    - by Pandiya Chendur
    I am following this article for sending SMS; it is a WinForms application. I have referenced all the DLLs in my ASP.NET application. I use an aspx page to detect a mobile device connected to the PC, but it always says the COM 'n' port could not be opened.
        using SMS;
        using GsmComm.GsmCommunication;
        using GsmComm.PduConverter;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                GsmCommMain comm = new GsmCommMain(6, 9600, 300);
                comm.Open();
                if (!comm.IsConnected())
                {
                    Response.Write("No Phone Connected");
                }
                else
                {
                    SmsSubmitPdu pdu = new SmsSubmitPdu("test", "+919999999999", "");
                    CommSetting.comm.SendMessage(pdu);
                }
            }
        }
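
    Not from the question - a hedged first check: under IIS the page runs as the application pool identity, which often cannot see or open the device, so it is worth listing the ports that identity can actually enumerate before blaming the GSM library. SerialPort.GetPortNames() is stock .NET; the page name below is a placeholder.

        // Hedged diagnostic sketch; drop into any test page running under the same app pool.
        using System;
        using System.IO.Ports;
        using System.Web.UI;

        public partial class PortCheck : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                foreach (string port in SerialPort.GetPortNames())
                {
                    Response.Write(port + "<br/>");    // e.g. COM3, COM6 - is the expected port even listed?
                }
            }
        }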


  • Extrapolation using fft in octave

    - by CFP
    Using GNU Octave, I'm computing an fft over a piece of signal, then eliminating some frequencies, and finally reconstructing the signal. This gives me a nice approximation of the signal, but it doesn't give me a way to extrapolate the data. Suppose basically that I have plotted three and a half periods of
        f: x -> sin(x) + 0.5*sin(3*x) + 1.2*sin(5*x)
    and then added a piece of low-amplitude, zero-centered random noise. With fft/ifft, I can easily remove most of the noise; but then how do I extrapolate 3 more periods of my signal data (other, of course, than duplicating the signal)? The math way is easy: you have a decomposition of your function as an infinite sum of sines/cosines, and you just need to extract a partial sum and apply it anywhere. But I don't quite get the programmatic way... Thanks!
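
    Not part of the question - a hedged Octave sketch of the "extract a partial sum and evaluate it anywhere" idea: keep only the strong fft bins, read an amplitude and phase out of each, and evaluate those sinusoids on a longer time axis. It only extrapolates cleanly when the analysed window holds a whole number of periods of every component (otherwise leakage smears the bins); the window size, threshold and test signal are arbitrary choices.

        % Hedged sketch; N, the 0.2 threshold and the test signal are arbitrary.
        N = 1024;                                  % samples in the analysed window
        t = (0:N-1)';
        x = sin(2*pi*8*t/N) + 0.5*sin(2*pi*24*t/N) + 1.2*sin(2*pi*40*t/N) + 0.05*randn(N,1);

        X = fft(x);
        keep = find(abs(X(2:N/2)) > 0.2*max(abs(X(2:N/2)))) + 1;   % strong bins, DC excluded

        M = 2*N;                                   % extend to twice the original length
        te = (0:M-1)';
        xe = zeros(M,1);
        for k = keep'                              % bin k (1-based) has frequency (k-1)/N cycles/sample
          a  = 2*abs(X(k))/N;                      % amplitude of that component
          ph = angle(X(k));                        % its phase
          xe = xe + a*cos(2*pi*(k-1)*te/N + ph);   % evaluate the partial sum on the longer axis
        end
        % xe(1:N) approximates the de-noised original; xe(N+1:end) is the extrapolation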

