Search Results

Search found 3574 results on 143 pages for 'difficult'.


  • apache2 namevirtualhost resolving wrong site

    - by joe
    Running apache 2.2.6. I'm setting up a development environment. dev and production will be hosted on the same machine, same IP address. DNS entries like prod.domain.com and dev.domain.com point to the same IP. Important: it is required that dev and prod are otherwise completely separate. Each will run its own apache instance. Each will use its own apache configuration. Each, prod and dev, will host http and https. I have this set up and working, but not as restrictive as I'd like. For instance, the production config:

        NameVirtualHost *:80
        NameVirtualHost *:443
        <VirtualHost *:80 >
            ServerName prod.domain.com
            # ... etc
        </VirtualHost>
        <VirtualHost *:443 >
            ServerName prod.domain.com
            # ... etc
        </VirtualHost>

    The dev site is set up similarly, using ports 8080 and 4443. Each site works fine. But assuming both apaches are running, one can also hit "cross-site" by mistake. So, inadvertently hitting prod.domain.com:8080 successfully returns a page from the dev site. It would be much better if this failed completely. This is a bit more difficult to solve (for me) because of the need for two apache configs. If all in one, the single process would have full knowledge of everything. So, I tried to solve this with brute force, including virtual hosts for the "other" site, with something that would fail, like no access to documentroot. But apache then inexplicably finds the "wrong" virtual host. Here's the full config for production, with the dummy dev configs.

        NameVirtualHost *:80
        NameVirtualHost *:443
        # ----------------------------------------------
        # DUMMY HOSTS
        <VirtualHost *:8080 >
            ServerName dev.domain.com:8080
            DocumentRoot /tmp/
            <Directory /tmp/ >
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:4443 >
            ServerName dev.domain.com:4443
            DocumentRoot /tmp/
            <Directory /tmp/ >
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>
        # ----------------------------------------------
        # REAL PRODUCTION HOSTS
        <VirtualHost *:80 >
            ServerName prod.domain.com:80
            DocumentRoot /something/valid/
            <Directory /something/valid/>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:443 >
            ServerName prod.domain.com:443
            DocumentRoot /something/valid/
            <Directory /something/valid/>
                Order allow,deny
                Allow from all
            </Directory>
            # .... other valid ssl setup
        </VirtualHost>

    Here's the strange thing. With this configuration, a prod.domain.com:80 hit succeeds. But a prod.domain.com:443 hit fails, because it finds the dev.domain.com:4443 instead. I've also tried removing the port from the ServerName, but it still doesn't work. Sorry for the long question. Hopefully this is enough information. Thanks in advance for any help.
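
    A side note, offered as a sketch rather than a verified fix: in Apache 2.2 a <VirtualHost *:4443> block only takes part in name-based matching when a matching NameVirtualHost *:4443 line exists, so one thing to try is declaring NameVirtualHost for every port that carries a vhost in this file. Whether that changes the 443-vs-4443 matching described above is untested here.

        # Sketch only -- one NameVirtualHost per port used by the blocks above
        NameVirtualHost *:80
        NameVirtualHost *:443
        NameVirtualHost *:8080
        NameVirtualHost *:4443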

    Read the article

  • BSOD & System Failure after trying to install a new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read. Basic System Information Let me give a basic introduction on my system. I have a system of following configuration: Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz 3.40GHz RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2 OS: Windows 7 Ultimate, SP1 Build 7601 HDD: 1 TB Seagate 7200 RPM The Problem It was working fine for about an year. Yesterday I planned to increase my RAM to 16 GB by putting another set of two Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9). I got it from an authorized reseller and also, the RAM was fitted by a service engineer only. After the RAM was fit (all the four), the system failed to start, with an error code of 0x000000f4. The complete information of it is: Problem signature: Problem Event Name: BlueScreen OS Version: 6.1.7601.2.1.0.256.1 Locale ID: 16393 Additional information about the problem: BCCode: f4 BCP1: 0000000000000003 BCP2: FFFFFA8008A39060 BCP3: FFFFFA8008A39340 BCP4: FFFFF800037C8510 OS Version: 6_1_7601 Service Pack: 1_0 Product: 256_1 Files that help describe the problem: C:\Windows\Minidump\093012-13041-01.dmp C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409 If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt Another Problem We first thought that it was the RAM, which caused the issue. So I returned the RAMs and now my computer configuration is exactly how it was the previous day. But, following the removal of the RAM, I also had several crashes after that. One suspicious thing was with an error code c0000134: STOP: c0000135 The program can’t start because %hs is missing from your computer . Try resintalling the program to fix this problem. After reading contents from this, this and this, which were never my case, they didn't help me. But I didn't receive any more STOP c0000134 messages. But this 0x000000f4 keeps on coming. I am writing from the same system and it allows me to work for say, half an hour max. Then I hear a device disconnect sound, the one you hear in Windows 7, when a USB Mass Storage Device is plugged out. Immediately following that, my screen goes blank and I get 0x000000f4 blue screen. Okay, now I am really concerned about my Hard Disk data, but I have no clue if there is a problem with the HDD. My Question What all files do I need to submit for your reference? Can this issue be fixed? I am getting more time if I remove my RAM, clean it and then put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance. Minidumps I have uploaded all the Minidump DMP files from C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip Let me know if you face any issues in accessing it. Will be able to share elsewhere. Updates 30-Sep-2012 10:15 AM IST: When I keep the system cover opened, pressed the HDD Cable well, it is allowing me to be on for about half an hour, I guess? Also, I feel that the CPU fan speed is kind of slow. It rotates at around 900 RPM, but the CPU Temperature is not more than 70° C. 30-Sep-2012 10:30 AM IST: My Modem (Beetel 220BX ADSL2+ Router) failed. I have no idea how it is related to this issue, but I thought that I need to document this too. I really have a bad day here. 
30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour. 30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. Started the system, and it hung after giving the password. After a few minutes, got the same 0x000000f4 error. So, while it is in the upright position, fixed the Hard Disk cable and now it is booting fine. Waiting for more observations and answers.
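
    In case it helps anyone reading along, the minidumps linked above can be opened with the free Debugging Tools for Windows; a minimal session looks roughly like this (the install path is an assumption and may differ):

        rem Hypothetical WinDbg invocation against one of the dumps listed above
        "C:\Program Files\Debugging Tools for Windows (x64)\windbg.exe" -z C:\Windows\Minidump\093012-13041-01.dmp
        rem then, at the kd> prompt, the usual first command:
        rem   !analyze -v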

    Read the article

  • Single-Signon options for Exchange 2010

    - by freiheit
    We're working on a project to migrate employee email from Unix/open-source (courier IMAP, exim, squirrelmail, etc) to Exchange 2010, and trying to figure out options for single-signon for Outlook Web Access. So far all the options I've found are very ugly and "unsupportable", and may simply not work with Forefront. We already have JA-SIG CAS for token-based single-signon and Shibboleth for SAML. Users are directed to a simple in-house portal (a Perl CGI, really) that they use to sign in to most stuff. We have an HA OpenLDAP cluster that's already synchronized against another AD domain and will be synchronized with the AD domain Exchange will be using. CAS authenticates against LDAP. The portal authenticates against CAS. Shibboleth authenticates with CAS but pulls additional data from LDAP. We're moving in the direction of having web services authenticate against CAS or Shibboleth. (Students are already on SAML/Shibboleth authenticated Google Apps for Education) With Squirrelmail we have a horrible hack linked to from that portal page that authenticates against CAS, gets your original plaintext password (yes, I know, evil), and gives you an HTTP form pre-filled with all the necessary squirrelmail login details with javaScript onLoad stuff to immediately submit the form. Trying to find out exactly what is possible with Exchange/OWA seems to be difficult. "CAS" is both the acronym for our single-signon server and an Exchange component. From what I've been able to tell there's an addon for Exchange that does SAML, but only for federating things like free/busy calendar info, not authenticating users. Plus it costs additional money so there's no way to experiment with it to see if it can be coaxed into doing what we want. Our plans for the Exchange cluster involve Forefront Threat Management Gateway (the new ISA) in the DMZ front-ending the CAS servers. So, the real question: Has anybody managed to make Exchange authenticate with CAS (token-based single-signon) or SAML, or with something I can reasonably likely make authenticate with one of those (such as anything that will accept apache's authentication)? With Forefront? Failing that, anybody have some tips on convincing OWA Forms Based Authentication (FBA) into letting us somehow "pre-login" the user? (log in as them and pass back cookies to the user, or giving the user a pre-filled form that autosubmits like we do with squirrelmail). This is the least-favorite option for a number of reasons, but it would (just barely) satisfy our requirements. From what I hear from the guy implementing Forefront, we may have to set OWA to basic authentication and do forms in Forefront for authentication, so it's possible this isn't even possible. I did find CasOwa, but it only mentions Exchange 2007, looks kinda scary, and as near as I can tell is mostly the same OWA FBA hack I was considering slightly more integrated with the CAS server. It also didn't look like many people had had much success with it. And it may not work with Forefront. There's also "CASifying Outlook Web Access 2", but that one scares me, too, and involves setting up a complex proxy config, which seems more likely to break. And, again, doesn't look like it would work with Forefront. Am I missing something with Exchange SAML (OWA Federated whatchamacallit) where it is possible to configure to do user authentication and not just free/busy access authorization?
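
    For reference, the "pre-filled form that autosubmits" idea mentioned above (the squirrelmail hack, and the least-favourite OWA option) is usually just a few lines of HTML. The OWA FBA URL and field names below are assumptions and would need to be checked against the real login form:

        <!-- Hypothetical auto-submitting form sketch; the action URL and field
             names are assumptions, not verified against OWA 2010 FBA. -->
        <form id="owa" method="post" action="https://mail.example.edu/owa/auth/owaauth.dll">
          <input type="hidden" name="destination" value="https://mail.example.edu/owa/" />
          <input type="hidden" name="username" value="DOMAIN\jdoe" />
          <input type="hidden" name="password" value="secret" />
        </form>
        <script type="text/javascript">document.getElementById("owa").submit();</script>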

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be as to not lose any log messages in case any single server goes offline - or in the case that an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at the minimum on two different EBS volumes in two availability zones, but S3 is a good place as well. It should also be realtime so that the messages arrive within seconds of their generation to two different availability zones. I also need to sync logfiles not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, and I will list them here: Flume to Flume to S3: I could set up two logservers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages and all messages would arrive in two availability zones in realtime. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then write some manual deduplication runs on the logfiles. I haven't found an easy solution to the duplication problem. Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use some of the clustering capable AMQP implementations, or fan out the deliver just as in the Flume case. AMQP seems to be a yet another moving part with several implementations and no real guidance on what works best this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from logstash to elasticsearch, assuming crashing servers in between. The fan-out solutions run in to the deduplication problem again. The best solution that would seem to handle all the cases, would be Beetle, which seems to provide high availability and deduplication via a redis store. However, I haven't seen any guidance on how to set this up with Logstash and Redis is one more moving part again for something that shouldn't be terribly difficult. Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves and just have them log directly to a removet ElasticSearch server. I think this should bring me reliable logging and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure if the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising. rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. 
And log rotations mess things up unless I'm careful. rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog. None of these solutions seem terribly good, and they have many unknowns still, so I am asking for more information here from people who have set up centralized reliable logging as to what are the best tools to achieve that goal.
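
    For the rsyslog/RELP option, a minimal client-side sketch (legacy directive syntax; hostnames, port and spool directory are made-up values) would be along these lines:

        $ModLoad omrelp
        $WorkDirectory /var/spool/rsyslog

        # queue to disk while the first target is unreachable, retry forever
        $ActionQueueType LinkedList
        $ActionQueueFileName relp_fwd1
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        *.* :omrelp:log1.internal.example:2514

        # second copy to the other availability zone
        $ActionQueueType LinkedList
        $ActionQueueFileName relp_fwd2
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        *.* :omrelp:log2.internal.example:2514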

    Read the article

  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background Recently built my own PC. It works! Almost. It's been a while since getting into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away -- it's brand new and we can go back from scratch as many times as necessary. Goal / Issue I'd like to use the SSD to take advantage of Intel's Smart Response technology (allows the SSD to act as a cache for HDDs) I would like the SSD cache to act as a cache for my HDDs, which I would like to be in a RAID1 array (so I get the speed from the SSD and the redundancy from the RAID1) However, Windows only sees the drive in device manager (not as a drive), so I'm unsure what to do about that. Related: as far as I know, for this to work, the drives all have to be in a single RAID array (i.e. a RAID0 pairing of the SSD and the RAID1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array. Steps so Far Moved the SSD onto the Intel controller (I'd had it on the Marvel 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way) Updated the BIOS of the motherboard to the latest version Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence). Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array. Checked the BIOS: it does not show up in the boot order, but is an option that can be selected under boot options. Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult? moved the ssd to the marvel controller. The firmware update cd appeared to hang while searching for drives. swapped out the SATA cable for the manufacturer's and moved back to the intel storage controller. Noticed at this point that in the Intel RST software, a device DOES show up in addition to the RAID set -- only shown as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in device manager. Move SSD to port from 0-3 on MOBO and set SATA mode to IDE (after disconnecting RAID1 config) to allow the firmware update to work. Firmware was already at the latest version. Next Steps ? Components involved ASUS P8Z68-V PRO motherboard (Intel Z68 Chipset) Intel i7 2600k Processor 2 x 1TB 7200 RPM HDDs 64 GB Crucial M4 SSD (M4-CT064M4SSD2) For Reference -- Storage Configuration Intel 3 gbps Intel 3gbps Intel 6gbps Marvel 6gbps +----------+ +----------+ +----------+ +----------+ | | <----+ | | +-+ | | | |----------| | |----------| |-|--------| |----------| | | | | + | | | | | | +----------+ | +--|-------+ +-|--------+ +----------+ | | | + v v | 1 TB HDD 64 GB SSD + +> 1 TB HDD For Reference -- Intel RST (v10.8.0.1003) Screenshot Don't mind the "rebuilding" -- knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD. Any thoughts? Thanks in advance for any help!

    Read the article

  • Site-to-site VPN using MD5 instead of SHA and getting regular disconnection

    - by Steven
    We are experiencing some strange behavior with a site-to-site IPsec VPN that goes down about every week for 30 minutes (Iam told 30 minutes exactly). I don't have access to the logs, so it's difficult to troubleshoot. What is also strange is that the two VPN devices are set to use SHA hash algorithm but apparently end up agreeing to use MD5. Does anybody have a clue? or is this just insufficient information? Edit: Here is an extract of the log of one of the two VPN devices, which is a Cisco 3000 series VPN concentrator. 27981 03/08/2010 10:02:16.290 SEV=4 IKE/41 RPT=16120 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A) 27983 03/08/2010 10:02:56.930 SEV=4 IKE/41 RPT=16121 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A) 27986 03/08/2010 10:03:35.370 SEV=4 IKE/41 RPT=16122 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A) [… same continues for another 15 minutes …] 28093 03/08/2010 10:19:46.710 SEV=4 IKE/41 RPT=16140 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A) 28096 03/08/2010 10:20:17.720 SEV=5 IKE/172 RPT=1291 xxxxxxxx Group [xxxxxxxx] Automatic NAT Detection Status: Remote end is NOT behind a NAT device This end IS behind a NAT device 28100 03/08/2010 10:20:17.820 SEV=3 IKE/134 RPT=79 xxxxxxxx Group [xxxxxxxx] Mismatch: Configured LAN-to-LAN proposal differs from negotiated proposal. Verify local and remote LAN-to-LAN connection lists. 28103 03/08/2010 10:20:17.820 SEV=4 IKE/119 RPT=1197 xxxxxxxx Group [xxxxxxxx] PHASE 1 COMPLETED 28104 03/08/2010 10:20:17.820 SEV=4 AUTH/22 RPT=1031 xxxxxxxx User [xxxxxxxx] Group [xxxxxxxx] connected, Session Type: IPSec/LAN- to-LAN 28106 03/08/2010 10:20:17.820 SEV=4 AUTH/84 RPT=39 LAN-to-LAN tunnel to headend device xxxxxxxx connected 28110 03/08/2010 10:20:17.920 SEV=5 IKE/25 RPT=1291 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0 28113 03/08/2010 10:20:17.920 SEV=5 IKE/24 RPT=88 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0 28116 03/08/2010 10:20:17.920 SEV=5 IKE/66 RPT=1290 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A 28117 03/08/2010 10:20:17.930 SEV=5 IKE/25 RPT=1292 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0 28120 03/08/2010 10:20:17.930 SEV=5 IKE/24 RPT=89 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0 28123 03/08/2010 10:20:17.930 SEV=5 IKE/66 RPT=1291 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A 28124 03/08/2010 10:20:18.070 SEV=4 IKE/173 RPT=17330 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices. 
28127 03/08/2010 10:20:18.070 SEV=4 IKE/49 RPT=17332 xxxxxxxx Group [xxxxxxxx] Security negotiation complete for LAN-to-LAN Group (xxxxxxxx) Responder, Inbound SPI = 0x56a4fe5c, Outbound SPI = 0xcdfc3892 28130 03/08/2010 10:20:18.070 SEV=4 IKE/120 RPT=17332 xxxxxxxx Group [xxxxxxxx] PHASE 2 COMPLETED (msgid=37b3b298) 28131 03/08/2010 10:20:18.750 SEV=4 IKE/41 RPT=16141 xxxxxxxx Group [xxxxxxxx] IKE Initiator: New Phase 2, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A) 28135 03/08/2010 10:20:18.870 SEV=4 IKE/173 RPT=17331 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.
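
    Without the peer's configuration it is hard to say more, but on an IOS-style peer the phase 1 proposal that should be agreeing on SHA is normally pinned down like this (a sketch; all values are assumptions, and the concentrator side is configured through its GUI rather than a config file):

        ! Hypothetical phase 1 policy on the far-end device
        crypto isakmp policy 10
         encryption aes 256
         hash sha
         authentication pre-share
         group 2
         lifetime 86400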

    Read the article

  • How can a hacker access VPS CentOS 6 content?

    - by user2118559
    Just want to understand. Please, correct mistakes and write advices Hacker can access to VPS: 1. Through (using) console terminal, for example, using PuTTY. To access, hacker need to know port number, username and password. Port number hacker can know scanning open ports and try to login. The only way to login as I understand need to know username and password. To block (make more difficult) port scanning, need to use iptables configure /etc/sysconfig/iptables. I followed this https://www.digitalocean.com/community/articles/how-to-setup-a-basic-ip-tables-configuration-on-centos-6 tutorial and got *nat :PREROUTING ACCEPT [87:4524] :POSTROUTING ACCEPT [77:4713] :OUTPUT ACCEPT [77:4713] COMMIT *mangle :PREROUTING ACCEPT [2358:200388] :INPUT ACCEPT [2358:200388] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [2638:477779] :POSTROUTING ACCEPT [2638:477779] COMMIT *filter :INPUT DROP [1:40] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [339:56132] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp --dport 110 -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT -A INPUT -s 11.111.11.111/32 -p tcp -m tcp --dport 21 -j ACCEPT COMMIT Regarding ports that need to be opened. If does not use ssl, then seems must leave open port 80 for website. Then for ssh (default 22) and for ftp (default 21). And set ip address, from which can connect. So if hacker uses other ip address, he can not access even knowing username and password? Regarding emails not sure. If I send email, using Gmail (Send mail as: (Use Gmail to send from your other email addresses)), then port 25 not necessary. For incoming emails at dynadot.com I use Email Forwarding. Does it mean that emails “does not arrive to VPS” (before arriving to VPS, emails are forwarded, for example to Gmail)? If emails does not arrive to VPS, then seems port 110 also not necessary. If use only ssl, must open port 443 and close port 80. Do not understand regarding port 3306 In PuTTY with /bin/netstat -lnp see Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 992/mysqld As understand it is for mysql. But does not remember that I have opened such port (may be when installed mysql, the port is opened automatically?). Mysql is installed on the same server, where all other content. Need to understand regarding port 3306 2. Also hacker may be able access console terminal through VPS hosting provider Control Panel (serial console emergency access). As understand only using console terminal (PuTTY, etc.) can make “global” changes (changes that can not modify with ftp). 3. Hacker can access to my VPS exploiting some hole in my php code and uploading, for example, Trojan. Unfortunately, faced situation that VPS was hacked. As understand it was because I used ZPanel. On VPS ( \etc\zpanel\panel\bin) ) found one php file, that was identified as Trojan by some virus scanners (at virustotal.com). Experimented with the file on local computer (wamp). And appears that hacker can see all content of VPS, rename, delete, upload etc. 
In my opinion, if I use a command like chattr +i /etc/php.ini in PuTTY, then a hacker would not be able to modify php.ini. Is there any other way to get into the VPS?
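
    Regarding the port 3306 question above: if MySQL only needs to serve the site running on the same VPS, one common step (a sketch; the path is the usual CentOS 6 default) is to bind it to loopback so it never listens on the network at all:

        # /etc/my.cnf
        [mysqld]
        bind-address = 127.0.0.1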

    Read the article

  • Squid - Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
    I'm trying to figure out how to make Squid proxy log to MySQL. I know ACL order is pretty important but I'm not sure if I understand exactly what ACLs are or do, it's difficult to explain, but hopefully you'll see where I'm going with this as you read! I have created the lines to make Squid interact with a helper in squid.conf as follows: external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php acl ex_log external mysql_log http_access allow ex_log The external ACL helper (mysql_lg.php) is a PHP script and is as follows: error_reporting(0); if (! defined(STDIN)) { define("STDIN", fopen("php://stdin", "r")); } $res = mysql_connect('localhost', 'squid', 'testsquidpw'); $dbres = mysql_select_db('squid', $res); while (!feof(STDIN)) { $line = trim(fgets(STDIN)); $fields = explode(' ', $line); $user = rawurldecode($fields[0]); $cli_ip = rawurldecode($fields[1]); $protocol = rawurldecode($fields[2]); $uri = rawurldecode($fields[3]); $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '".$user."', '".$cli_ip."', '".$protocol."', '".$uri."');"; mysql_query($q) or die (mysql_error()); if ($fault) { fwrite(STDOUT, "ERR\n"); }; fwrite(STDOUT, "OK\n"); } The configuration I have right now looks like this: ## Authentication Handler auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 30 auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic auth_param negotiate children 5 # Allow squid to update log external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php acl ex_log external mysql_log http_access allow ex_log acl localnet src 172.16.45.0/24 acl AuthorizedUsers proxy_auth REQUIRED acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl CONNECT method CONNECT acl blockeddomain url_regex "/etc/squid3/bl.acl" http_access deny blockeddomain deny_info ERR_BAD_GENERAL blockeddomain # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # Allow the internal network access to this proxy http_access allow localnet # Allow authorized users access to this proxy http_access allow AuthorizedUsers # FINAL RULE - Deny all other access to this proxy http_access deny all From testing, the closer to the bottom I place the logging lines the less it logs. Oftentimes, it even places empty rows in to the MySQL table. The file-based logs in /var/log/squid3/access.log are correct but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting lines in because I want to log everything to MySQL, unauthenticated requests, blocked requests, which category blocked a specific request. The reason I want this in MySQL is because I'm trying to have everything managed via a custom web-based frontend and want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain without keeping staff waiting on the phone whilst I add a new rule and reload the server! Hopefully someone can help me out here because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
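
    As an aside, the helper above builds its INSERT from unescaped strings and never sets $fault. A tidied sketch of the same helper (same OK/ERR protocol, same table layout and credentials as in the question, still the legacy mysql_* API of that era) might look like:

        <?php
        // Reads one "user src proto uri" line per request and answers OK/ERR.
        error_reporting(0);
        $db = mysql_connect('localhost', 'squid', 'testsquidpw');
        mysql_select_db('squid', $db);

        $stdin = fopen('php://stdin', 'r');
        while (($line = fgets($stdin)) !== false) {
            $fields = explode(' ', trim($line));
            if (count($fields) < 4) { fwrite(STDOUT, "ERR\n"); continue; }
            list($user, $cli_ip, $protocol, $uri) = array_map('rawurldecode', $fields);
            $q = sprintf("INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '%s', '%s', '%s', '%s')",
                mysql_real_escape_string($user, $db), mysql_real_escape_string($cli_ip, $db),
                mysql_real_escape_string($protocol, $db), mysql_real_escape_string($uri, $db));
            fwrite(STDOUT, mysql_query($q, $db) ? "OK\n" : "ERR\n");
        }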

    Read the article

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, Asus M2V MX motherboard, and a single 3 TB WDC Green (another one as mirror may be installed in the future). It's the cheapest solution I found that includes ECC memory and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and stream to other desktop computers; storage of the scanned dia slides (3-4k slides, 180 MB TIFF each one plus reduced quality JPEG version); stream of these photos to a local iPad 2 (maybe Plex App? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, no monitor attached. Absolute requirement is a reliable ZFS solution, plus the ability to easily install packets/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options: NAS4free/FreeNAS OpenIndiana Solaris Express 11 (yeah yeah I know the license requirements, I will write a perl script on it to count it as development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) required embedded installation for remote upgrading, but full install for easy addition of software packets. Since I need at least AirVideo Server (linux/win) and Plex App (win/linux) to stream the photos and some videos to iPad (they both require virtualbox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris has also another advantage: Crashplan client supports Solaris and I'm already using it for other backups. I would like to leave the option open, even if I will be doing backups probably through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version. The question is now Solaris 11 Express over OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest and why? I found lots of informations and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana has maybe more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else, I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for me in the future (where high performances with Mac shares are also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post. UPDATES Given the first answers, I will strongly suggest the person paying the hardware to insert a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but better having something good from the beginning.
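
    For the remote backup via zfs send/receive mentioned above, the incremental round trip is short enough to sketch here (pool, dataset, snapshot names and the target host are placeholders):

        # take today's snapshot, then send only the delta since yesterday's
        zfs snapshot tank/media@2012-11-30
        zfs send -i tank/media@2012-11-29 tank/media@2012-11-30 | \
            ssh backup.example.net zfs receive -F backup/media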

    Read the article

  • I am starting to think that Prevx.com isn't a legit site... but here's my long-winded question

    - by cop1152
    I apologize in advance for the long-winded post. I posted it all because I believe its informative and may be useful. Also, I posted my question at the end. Moments ago I was RDC to a file server in my home (from inside my home). I had opened Firefox and Googled for a manufacturers website. Immediately after clicking the link, Firefox abruptly closed. This seemed odd to me to so I checked the running processes and discovered d.exe, e.exe, and f.exe running. I Googled these processes on a different machine and found them belonging to a key-logger/screen-capturer/trojan called defender.exe, which according to the Prevx lives in c:\documents and settings\user\local settings\temp. (Prevx link http://www.prevx.com/filenames/147352809685142526-X1/DEFENDER32.EXE.html) Simultaneously, an obviously-spoofed Windows Firewall popup appeared on the server asking me to click ‘yes’ to update Windows Firewall. At this time I ended all rogue processes, emptied the temp folder, removed defender.exe from startup, and checked my registry and a few other locations. Before deleting Defender.exe I noted that it was created moments ago, just before Firefox crashed. I believe that I was ‘almost’ infected with this malware. I believe that it needed me to click the phony popup in order to complete infection because it wasn’t allowed to execute processes from the temp folder. After cleaning the machine, I restarted it and have been monitoring it for over an hour. I am debating on whether or not to restore the Windows partition (a separate physical drive from the data) or to just watch it for awhle. I should mention that, because of the specs on this machine, I do not run antivirus software, but I know it well and inspect it regularly. It is a very old Compaq with a 400mhz processer and 512mb of ram. I have a static IP and the server is in the DMZ running an FTP client and some HTTP server software. All files transferred to and stored on this machine are scanned for malware before transferring. Usually the machine only runs 19 processes and performs pretty well for its intended purpose. I posted the story so that you could be aware of a possible new piece of malware and how it acts, but I also have a question or two. First, over the last few months I have noticed that PREVX is listed at the top of most of my Google searches when researching malware, especially for new or obscure malware…and they always want you to purchase something. I don’t think they are one of the top AV companies, so it seems odd that they are always the top Google result. Does anyone have any experience with any of their products? Also, what sites do you rely on for malware researching? Recently, I have found it difficult to find good info because of HijackThis-logs and other deadend info cluttering up my searches. And lastly, besides antivirus, third-party firewall, etc, what settings would you use to lock down a machine to make it more secure in instances where a stubborn admin like myself refuses to run AV? Thanks.

    Read the article

  • A failed disk (Pay for professional service or SpinRite?)(new edit)

    - by huggie
    EDIT: After much negotiating and begging and seeing through promotion smoke screen, thanks to the nice representative who took my case, I now know that the engineer has already fixed my NTFS partition (I guess it might be a bad block in the partition table?). She told me that the problem was considered minor, and I should be able to boot normally and just copy stuff out. Whew..I'm glad I didn't agree to the NTD $16,000 deal. New question (should this be in a new thread?): is it safer to use the linux "dd" command or is it better to boot normally into Windows XP and just copy stuff out? EDIT2: Thanks to all the help. I give the best answer to Console as it's most directed related to my question. But many suggestion are helpful and informational. ---- ORIGINAL POST BELOW --- Hi, in my previous post (You don't need to read but it's at http://superuser.com/questions/48838/windows-xp-a-disk-read-error-occurred), I said that my hard disk was not booting and is showing "a disk read error occurred". I took it to a recovery professional. A representative responded today told me that the NTFS partitions have a "NTFS partition system crash". I have no idea what that means. The engineer handling my drive will not be available for contact till tomorrow. Now the company charges me NTD (New Taiwan Dollar) $16,000 to recover lost data, that's kind of a lot considering that my graduate student monthly stipend is currently NTD $32,000 (max. allowed by regulation, may be lower, may change depend on funding). Now I'm weighting in between the options. Option A: let the professional recovers it with the half of my monthly stipend. If file/directories I designated are not recovered I don't pay a penny. (other than the initial examination fee of NTD $1000 which I've already paid.) Option B: let me try SpinRite, if failed, back to Option A. I spoke to the representative at the company they recommended me not to handle it on my own (yeah of course that's what they all want to say, right?), and at the price tag the disk error is probably relatively minor and data recoverable. But the representative really did not have detailed information of the disk failure so I didn't take her recommendation readily. Though one thing I heed was that she said that what they would do is to duplicate the disk before attempting discovery, so there would be no data loss (Is this true? can't duplicating invoke further data loss?). That sounds very good to me. Or maybe a third option: Option C: Negotiate with them to pay them to duplicate the disk hopefully for a much smaller price tag. Let me try SpinRite, if failed, back to Option A. This is a difficult decision. Ultimately I want my data back, but if a cheaper way is available to achieve the same thing... Can operating with SpinRite also corrupt data in someway? I've no idea what happened to my drive. I'll attempt to contact the engineer and hope to get it clarified and make an edit here.
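
    If the dd route comes up again, the usual pattern is to image the failing drive first and work on the copy (a sketch; device and destination are placeholders, and GNU ddrescue is generally preferred for drives with read errors):

        # image the whole drive, padding unreadable blocks instead of aborting
        dd if=/dev/sda of=/mnt/backup/sda.img bs=64K conv=noerror,sync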

    Read the article

  • ASP Classic Session Variable Not Always Getting Set

    - by Mike at KBS
    I have a classic ASP site I'm maintaining and recently we wanted to do special rendering actions if the visitor was coming from one specific set of websites. So in the page where those visitors have to come through to get to our site, I put a simple line setting a session variable: <!-- #include virtual="/clsdbAccess.cs"--> <% set db = new dbAccess %> <html> <head> ... <script language="javascript" type="text/javascript"> ... </script> </head> <body> <% Session("CAME_FROM_NEWSPAPER") = "your local newspaper" ... %> ... html stuff ... </body> </html> Then, in all the following pages, whenever I need to put up nav links, menus, etc., that we don't want to show to those visitors, I test for if that session variable is "" or not, then render accordingly. My problem is that it seems to work great for me in development, then I get it out into Production and it works great sometimes, and other times it doesn't work at all. Sometimes the session variable gets set, sometimes it doesn't. Everything else works great. But our client is seeing inconsistent results on pages rendered to their visitors and this is a problem. I've tried logging on from different PC's and I confirm that I get different, and unpredictable, results. The problem is I can't reproduce it at will, and I can't troubleshoot/trace Production. For now I've solved this by just creating duplicate pages with the special rendering, and making sure those visitors go there. But this is a hack and will eventually be difficult to maintain. Is there something special or inconsistent about session variables in Classic ASP? Is there a better way to approach the problem? TIA.
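
    For what it's worth, the per-page check described above ("test for if that session variable is empty or not") is only a few lines of classic ASP; this is just a sketch of that pattern, not a fix for the inconsistency:

        <%
        ' Render differently for visitors who arrived via the newspaper page
        If Session("CAME_FROM_NEWSPAPER") <> "" Then
            ' suppress the nav links / menus for these visitors
        Else
            ' render the normal navigation
        End If
        %>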

    Read the article

  • iOS static Framework crash when animating view

    - by user1439216
    I'm encountering a difficult to debug issue with a static library project when attempting to animate a view. It works fine when debugging (and even when debugging in the release configuration), but throws an error archived as a release: Exception Type: EXC_CRASH (SIGSYS) Exception Codes: 0x00000000, 0x00000000 Crashed Thread: 0 Thread 0 name: Dispatch queue: com.apple.main-thread Thread 0 Crashed: 0 TestApp 0x000d04fc 0x91000 + 259324 1 UIKit 0x336d777e +[UIView(UIViewAnimationWithBlocks) animateWithDuration:animations:] + 42 2 TestApp 0x000d04de 0x91000 + 259294 3 TestApp 0x000d0678 0x91000 + 259704 4 Foundation 0x355f04f8 __57-[NSNotificationCenter addObserver:selector:name:object:]_block_invoke_0 + 12 5 CoreFoundation 0x35aae540 ___CFXNotificationPost_block_invoke_0 + 64 6 CoreFoundation 0x35a3a090 _CFXNotificationPost + 1400 7 Foundation 0x355643e4 -[NSNotificationCenter postNotificationName:object:userInfo:] + 60 8 UIKit 0x33599112 -[UIInputViewTransition postNotificationsForTransitionStart] + 846 9 UIKit 0x335988cc -[UIPeripheralHost(UIKitInternal) executeTransition:] + 880 10 UIKit 0x3351bb8c -[UIPeripheralHost(UIKitInternal) setInputViews:animationStyle:] + 304 11 UIKit 0x3351b260 -[UIPeripheralHost(UIKitInternal) _reloadInputViewsForResponder:] + 952 12 UIKit 0x3351ae54 -[UIResponder(UIResponderInputViewAdditions) reloadInputViews] + 160 13 UIKit 0x3351a990 -[UIResponder becomeFirstResponder] + 452 14 UIKit 0x336194a0 -[UITextInteractionAssistant setFirstResponderIfNecessary] + 168 15 UIKit 0x33618d6a -[UITextInteractionAssistant oneFingerTap:] + 1602 16 UIKit 0x33618630 _UIGestureRecognizerSendActions + 100 17 UIKit 0x335a8d5e -[UIGestureRecognizer _updateGestureWithEvent:] + 298 18 UIKit 0x337d9472 ___UIGestureRecognizerUpdate_block_invoke_0541 + 42 19 UIKit 0x33524f4e _UIGestureRecognizerApplyBlocksToArray + 170 20 UIKit 0x33523a9c _UIGestureRecognizerUpdate + 892 21 UIKit 0x335307e2 _UIGestureRecognizerUpdateGesturesFromSendEvent + 22 22 UIKit 0x33530620 -[UIWindow _sendGesturesForEvent:] + 768 23 UIKit 0x335301ee -[UIWindow sendEvent:] + 82 24 UIKit 0x3351668e -[UIApplication sendEvent:] + 350 25 UIKit 0x33515f34 _UIApplicationHandleEvent + 5820 26 GraphicsServices 0x376d5224 PurpleEventCallback + 876 27 CoreFoundation 0x35ab651c __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 32 28 CoreFoundation 0x35ab64be __CFRunLoopDoSource1 + 134 29 CoreFoundation 0x35ab530c __CFRunLoopRun + 1364 30 CoreFoundation 0x35a3849e CFRunLoopRunSpecific + 294 31 CoreFoundation 0x35a38366 CFRunLoopRunInMode + 98 32 GraphicsServices 0x376d4432 GSEventRunModal + 130 33 UIKit 0x33544cce UIApplicationMain + 1074 Thread 0 crashed with ARM Thread State: r0: 0x0000004e r1: 0x000d04f8 r2: 0x338fed47 r3: 0x3f523340 r4: 0x00000000 r5: 0x2fe8da00 r6: 0x00000001 r7: 0x2fe8d9d0 r8: 0x3f54cad0 r9: 0x00000000 r10: 0x3fd00000 r11: 0x3f523310 ip: 0x3f497048 sp: 0x2fe8d988 lr: 0x33539a41 pc: 0x000d04fc cpsr: 0x60000010 To give some background info: The static library is part of an 'iOS fake-framework', built using the templates from here: https://github.com/kstenerud/iOS-Universal-Framework The framework presents a registration UI as a modal view on top of whatever the client application is doing at the time. It pushes these views using a handle to a UIViewController provided by the client application. 
It doesn't do anything special, but here's the animation code: -(void)keyboardWillShowNotification:(NSNotification *)notification { double animationDuration = [[[notification userInfo] objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue]; dispatch_async(dispatch_get_main_queue(), ^(void) { [self animateViewsToState:kUMAnimationStateKeyboardVisible forIdiom:[UIDevice currentDevice].userInterfaceIdiom forDuration:animationDuration]; }); } -(void)animateViewsToState:(kUMAnimationState)state forIdiom:(UIUserInterfaceIdiom)idiom forDuration:(double)duration { float fieldOffset; if (idiom == UIUserInterfaceIdiomPhone) { if (state == kUMAnimationStateKeyboardVisible) { fieldOffset = -KEYBOARD_HEIGHT_IPHONE_PORTRAIT; } else { fieldOffset = KEYBOARD_HEIGHT_IPHONE_PORTRAIT; } } else { if (state == kUMAnimationStateKeyboardVisible) { fieldOffset = -IPAD_FIELD_OFFSET; } else { fieldOffset = IPAD_FIELD_OFFSET; } } [UIView animateWithDuration:duration animations:^(void) { mUserNameField.frame = CGRectOffset(mUserNameField.frame, 0, fieldOffset); mUserPasswordField.frame = CGRectOffset(mUserPasswordField.frame, 0, fieldOffset); }]; } Further printf-style debugging shows that it crashes whenever I do anything much with UIKit - specifically, it crashes when I replace -animateViewsToState with: if (0 == UIUserInterfaceIdiomPhone) { NSLog(@""); } and [[[[UIAlertView alloc] initWithTitle:@"test" message:@"123" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil] autorelease] show]; To me, this sounds like a linker problem, but I don't understand how such problems would only manifest here, and not beforehand. Any help would be greatly appreciated.
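
    On the "sounds like a linker problem" hunch: with static libraries and Objective-C categories the usual suspects are the client app's linker flags, so one thing worth checking (an assumption, not a confirmed diagnosis for this crash) is:

        // Client app target, Build Settings -- xcconfig-style sketch
        OTHER_LDFLAGS = -ObjC
        // older setups sometimes also needed -all_load, or
        // -force_load <path to the static library inside the framework>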

    Read the article

  • Zend Form, table decorators

    - by levacjeep
    Hello, I am having an incredibly difficult time to decorate a Zend form the way I need to. This is the HTML structure I am in need of: <table> <thead><tr><th>one</th><th>two</th><th>three</th><th>four</th></thead> <tbody> <tr> <td><input type='checkbox' id='something'/></td> <td><img src='src'/></td> <td><input type='text' id='something'/></td> <td><input type='radio' group='justonegroup'/></td> </tr> <tr> <td><input type='checkbox' id='something'/></td> <td><img src='src'/></td> <td><input type='text' id='something'/></td> <td><input type='radio' group='justonegroup'/></td> </tr> </tbody> </table> The number of rows in the body is determined by my looping structure inside my form class. All ids will be unique of course. All radio buttons in the form belongs to one group. My issue really is that I am unsure how to create and then style the object Zend_Form_Element_MultiCheckbox and Zend_Form_Element_Radio inside my table. Where/how would I apply the appropriate decoraters to the checkboxes and radio buttons to have a form structure like above? My Form class so far: class Form_ManageAlbums extends Zend_Form { public function __construct($album_id) { $photos = Model_DbTable_Photos::getAlbumPhotos($album_id); $selector = new Zend_Form_Element_MultiCheckbox('selector'); $radio = new Zend_Form_Element_Radio('group'); $options = array(); while($photo = $photos->fetchObject()) { $options[$photo->id] = ''; $image = new Zend_Form_Element_Image('image'.$photo->id); $image->setImageValue('/dog/upload/'.$photo->uid.'/photo/'.$photo->src); $caption = new Zend_Form_Element_Text('caption'.$photo->id); $caption->setValue($photo->caption); $this->addElements(array($image, $caption)); } $selector->addMultiOptions($options); $radio->addMultiOptions($options); $this->addElement($selector); $this->setDecorators(array( 'FormElements', array('HtmlTag', array('tag' => 'table')), 'Form' )); } } I have tried a few combination of decoraters for the td and tr, but no success to date. Thank you for any help, very appreciated. JP Levac
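
    One building block for the layout above (a rough sketch, not a tested answer): each element can be wrapped in its own <td> with an HtmlTag decorator, leaving the <tr> and <table> structure to a custom view script or a ViewScript decorator on the form:

        // inside the loop, for each generated element
        $caption->setDecorators(array(
            'ViewHelper',
            array('HtmlTag', array('tag' => 'td')),
        ));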

    Read the article

  • Server side Xforms form validation and integration into ASP.NET

    - by Nigel
    I have recently been investigating methods of creating web-based forms for an ASP.NET web application that can be edited and managed at runtime. For example an administrator might wish to add a new validation rule or a new set of fields. The holy grail would provide a means of specifying a form along with (potentially very complex) arbitrary validation rules, and allocation of data sources for each field. The specification would then be used to update the deployed form in the web application which would then validate submissions both on the client side and on the server side. My investigations led me to Xforms and a number of technologies that support it. One solution appears to be IBM Lotus Forms, but this requires a very large investment in terms of infrastructure, which makes it infeasible, although the forms designer may be useful as a stand-alone tool for creating the forms. I have also discounted browser plug-ins as the form must be publicly visible and cross-browser compliant. I have noticed that there are numerous javascript libraries that provide client side implementations given an Xforms schema. These would provide a partial solution but server side validation is still a requirement. Another option seems to involve the use of server side solutions such as the Java application Orbeon. Orbeon provides a tool for specifying the forms (although not as rich as Lotus Forms Designer), but the most interesting point is that it can translate an XForms schema into an XHTML form complete with validation. The fact that it is written in Java is not a big problem if it is possible to integrate with the existing ASP.NET application. So my question is whether anyone has done this before. It sounds like a problem that should have been solved but is inherently very complex. It seems possible to use an off-the-shelf tool to design the form and export it to an Xforms schema and xhtml form, and it seems possible to take that xforms schema and form and publish it using a client side library. What seems to be difficult is providing a means of validating the form submission on the server side and integrating the process nicely with .NET (although it seems the .NET community doesn't involve themselves with XForms; please correct me if I'm wrong on this count). I would be more than happy if a product provided something simple like a web service that could validate a submission against a schema. Maybe Orbeon does this but I'd be grateful if somebody in the know could point me in the right direction before I research it further. Many thanks.
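
    On the "web service that could validate a submission against a schema" idea: plain XSD validation (which covers structure, not XForms bind/constraint rules) is already available in .NET, so a hedged sketch of that server-side piece might be:

        // Sketch only: validates a submitted XML document against an XSD;
        // names and paths are placeholders.
        using System.Collections.Generic;
        using System.Xml;

        static IList<string> ValidateSubmission(System.IO.Stream submission, string xsdPath)
        {
            var errors = new List<string>();
            var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
            settings.Schemas.Add(null, xsdPath);                  // target namespace read from the XSD
            settings.ValidationEventHandler += (s, e) => errors.Add(e.Message);
            using (var reader = XmlReader.Create(submission, settings))
                while (reader.Read()) { }                         // reading drives validation
            return errors;                                        // empty list means valid
        }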

    Read the article

  • Validate a string in a table in SQL Server - CLR function or T-SQL

    - by Ashish Gupta
    I need to check If a column value (string) in SQL server table starts with a small letter and can only contain '_', '-', numbers and alphabets. I know I can use a SQL server CLR function for that. However, I am trying to implement that validation using a scalar UDF and could make very little here...I can use 'NOT LIKE', but I am not sure how to make sure I validate the string irrespective of the order of characters or in other words write a pattern in SQL for this. Am I better off using a SQL CLR function? Any help will be appreciated.. Thanks in advance Thank you everyone for their comments. This morning, I chose to go CLR function way. For the purpose of what I was trying to achieve, I created one CLR function which does the validation of an input string and have that called from a SQL UDF and It works well. Just to measure the performance of t-SQL UDF using SQL CLR function vs t- SQL UDF, I created a SQL CLR function which will just check if the input string contains only small letters, it should return true else false and have that called from a UDF (IsLowerCaseCLR). After that I also created a regular t-SQL UDF(IsLowerCaseTSQL) which does the same thing using the 'NOT LIKE'. Then I created a table (Person) with columns Name(varchar) and IsValid(bit) columns and populate that with names to test. Data :- 1000 records with 'Ashish' as value for Name column 1000 records with 'ashish' as value for Name column then I ran the following :- UPDATE Person Set IsValid=1 WHERE dbo.IsLowerCaseTSQL (Name) Above updated 1000 records (with Isvalid=1) and took less than a second. I deleted all the data in the table and repopulated the same with same data. Then updated the same table using Sql CLR UDF (with Isvalid=1) and this took 3 seconds! If update happens for 5000 records, regular UDF takes 0 seconds compared to CLR UDF which takes 16 seconds! I am very less knowledgeable on t-SQL regular expression or I could have tested my actual more complex validation criteria. But I just wanted to know, even I could have written that, would that have been faster than the SQL CLR function considering the example above. Are we using SQL CLR because we can implement we can implement lot richer logic which would have been difficult otherwise If we write in regular SQL. Sorry for this long post. I just want to know from the experts. Please feel free to ask if you could not understand anything here. Thank you again for your time.
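
    For completeness, the pure T-SQL pattern hinted at with NOT LIKE can be written roughly as below (a sketch; the binary COLLATE makes the [a-z] range case-sensitive, and the collation name is an assumption to adjust to the server's defaults):

        CREATE FUNCTION dbo.IsValidName (@s varchar(4000))
        RETURNS bit
        AS
        BEGIN
            -- must start with a lower-case letter and contain only letters,
            -- digits, underscore or hyphen
            DECLARE @ok bit;
            SET @ok = 0;
            IF @s COLLATE Latin1_General_BIN LIKE '[a-z]%'
               AND @s COLLATE Latin1_General_BIN NOT LIKE '%[^a-zA-Z0-9_-]%'
                SET @ok = 1;
            RETURN @ok;
        END
        -- usage: SELECT name FROM SomeTable WHERE dbo.IsValidName(name) = 0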

    Read the article

  • Reflection - SetValue of array within class?

    - by Jaymz87
    OK, I've been working on something for a while now, using reflection to accomplish a lot of what I need to do, but I've hit a bit of a stumbling block... I'm trying to use reflection to populate the properties of an array of a child property... not sure that's clear, so it's probably best explained in code: Parent Class: Public Class parent Private _child As childObject() Public Property child As childObject() Get Return _child End Get Set(ByVal value As child()) _child = value End Set End Property End Class Child Class: Public Class childObject Private _name As String Public Property name As String Get Return _name End Get Set(ByVal value As String) _name = value End Set End Property Private _descr As String Public Property descr As String Get Return _descr End Get Set(ByVal value As String) _descr = value End Set End Property End Class So, using reflection, I'm trying to set the values of the array of child objects through the parent object... I've tried several methods... the following is pretty much what I've got at the moment (I've added sample data just to keep things simple): Dim Results(1) As String Results(0) = "1,2" Results(1) = "2,3" Dim parent As New parent Dim child As childObject() = New childObject() {} Dim PropInfo As PropertyInfo() = child.GetType().GetProperties() Dim i As Integer = 0 For Each res As String In Results Dim ResultSet As String() = res.Split(",") ReDim child(i) Dim j As Integer = 0 For Each PropItem As PropertyInfo In PropInfo PropItem.SetValue(child, ResultSet(j), Nothing) j += 1 Next i += 1 Next parent.child = child This fails miserably on PropItem.SetValue with ArgumentException: Property set method not found. Anyone have any ideas? @Jon :- Thanks, I think I've gotten a little further, by creating individual child objects, and then assigning them to an array... The issue is now trying to get that array assigned to the parent object (using reflection). It shouldn't be difficult, but I think the problem comes because I don't necessarily know the parent/child types. I'm using reflection to determine which parent/child is being passed in. The parent always has only one property, which is an array of the child object. When I try assigning the child array to the parent object, I get a invalid cast exception saying it can't convert Object[] to . EDIT: Basically, what I have now is: Dim PropChildInfo As PropertyInfo() = ResponseObject.GetType().GetProperties() For Each PropItem As PropertyInfo In PropChildInfo PropItem.SetValue(ResponseObject, ResponseChildren, Nothing) Next ResponseObject is an instance of the parent Class, and ResponseChildren is an array of the childObject Class. This fails with: Object of type 'System.Object[]' cannot be converted to type 'childObject[]'.
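
    As a contrast to the array-level SetValue above, a per-element sketch (property names hard-coded for illustration; mapping columns to properties by name is safer than relying on GetProperties() order) would be:

        ' Build a typed childObject() array and set properties on each element
        Dim children(Results.Length - 1) As childObject
        For i As Integer = 0 To Results.Length - 1
            Dim values As String() = Results(i).Split(","c)
            children(i) = New childObject()
            children(i).GetType().GetProperty("name").SetValue(children(i), values(0), Nothing)
            children(i).GetType().GetProperty("descr").SetValue(children(i), values(1), Nothing)
        Next
        parent.child = children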

    Read the article

  • Is MVVM pointless?

    - by joebeazelman
    Is orthodox MVVM implementation pointless? I am creating a new application, and I considered Windows Forms and WPF. I chose WPF because it's future-proof and offers lots of flexibility: there is less code, and it is easier to make significant changes to your UI using XAML. Since the choice of WPF was obvious, I figured I might as well go all the way and use MVVM as my application architecture, since it offers blendability, separation of concerns and unit testability. In theory it seems beautiful, like the holy grail of UI programming. This brief adventure, however, has turned into a real headache. As so often happens in practice, I'm finding that I've traded one problem for another.

    I tend to be an obsessive programmer in that I want to do things the right way so that I get the right results and possibly become a better programmer. The MVVM pattern just flunked my productivity test and has turned into a big, yucky hack! The clear case in point is adding support for a modal dialog box. The correct way is to put up a dialog box and tie it to a view model, but getting this to work is difficult. To benefit from the MVVM pattern you have to distribute code in several places throughout the layers of your application, and you have to use esoteric programming constructs like templates and lambda expressions: stuff that makes you stare at the screen scratching your head. That makes maintenance and debugging a nightmare waiting to happen, as I recently discovered. I had an About box working fine until I got an exception the second time I invoked it, saying that the dialog could not be shown again once it had been closed. I had to add an event handler for the close functionality to the dialog window, another one in the IDialogView implementation, and finally another in the IDialogViewModel. I thought MVVM was supposed to save us from such extravagant hackery!

    There are several folks out there with competing solutions to this problem, and they are all hacks that don't provide a clean, easily reusable, elegant solution. Most of the MVVM toolkits gloss over dialogs, and when they do address them, they only cover alert boxes that don't require custom interfaces or view models.

    I'm planning on giving up on the MVVM pattern, at least the orthodox implementation of it. What do you think? Has it been worth the trouble for you, if you had any? Am I just an incompetent programmer, or is MVVM not what it's hyped up to be?
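    For what it's worth, the workaround most people settle on for the modal-dialog case is a small dialog service injected into the view model, so the window plumbing lives in one place and a fresh Window is created per call, which is what avoids the "cannot show a dialog again once it is closed" exception. The sketch below is illustrative only; the interface and class names are not taken from any particular MVVM toolkit.

        // Sketch: a minimal dialog service. The VM-to-view mapping (here via
        // Content + DataTemplate) is an assumption, not part of any framework.
        public interface IDialogService
        {
            bool? ShowDialog(object viewModel);
        }

        public class WpfDialogService : IDialogService
        {
            public bool? ShowDialog(object viewModel)
            {
                // A fresh Window per call: WPF will not re-show a closed Window.
                var window = new System.Windows.Window
                {
                    Content = viewModel,   // rendered through a DataTemplate keyed on the VM type
                    SizeToContent = System.Windows.SizeToContent.WidthAndHeight,
                    WindowStartupLocation = System.Windows.WindowStartupLocation.CenterOwner
                };
                return window.ShowDialog();
            }
        }

        public class ShellViewModel
        {
            private readonly IDialogService _dialogs;

            public ShellViewModel(IDialogService dialogs) { _dialogs = dialogs; }

            public void ShowAbout()
            {
                _dialogs.ShowDialog(new AboutViewModel());   // AboutViewModel is a hypothetical dialog VM
            }
        }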

    Read the article

  • IE8 positioning, nightmare!

    - by Kyle Sevenoaks
    Well hi, guess what, I have an IE positioning issue! This is in IE8, so god knows what's going on in the other versions (checking later). Both boxes use the same class, so why is IE being so difficult? The original question includes two screenshots, one of how it's meant to look and one of how it actually looks (not reproduced here).

    CSS (comments removed for ease of reading):

        div .roundbigboxkunde {
            background-image: url(../../upload/EW_kunde_info.png);
            background-position: top center;
            padding: 10px;
            padding-top: 10px;
            padding-bottom: 20px;
            width: 560px;
            height: 1%;
            border-width: 1px;
            border-color: #dddddd;
            border-radius: 10px;
            -moz-border-radius: 10px;
            -webkit-border-radius: 10px;
            z-index: 1;
            position: relative;
            overflow: hidden;
        }

        div .roundbigboxkundei {
            margin-top: 10px;
            padding: 10px;
            padding-top: 10px;
            padding-bottom: 10px;
            width: 760px;
            height: 1%;
            position: relative;
            overflow: hidden;
        }

    And the HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <div class="roundbigboxkunde">
          <div class="roundbigboxkundei">
            <p id="nyk">&nbsp;</p>
            <div id="bg_box2"></div>
            <p class="required">
              <label for="billing_firstName"><span class="label">Fornavn:</span></label>
              <fieldset class="error">
                <input name="billing_firstName" class="text" type="text" value="Kyle"/>
                <div class="errorText hidden"></div>
              </fieldset>
            </p>
            CONTENT CONTINUES
        </fieldset>

    Here is the page.
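    One thing that stands out from the CSS alone (a guess, since the screenshots are not visible here): the inner box is declared 760px wide inside an outer box that is only 560px wide with overflow:hidden, so part of it will be clipped. A sketch of what matching the widths might look like; this is an assumption to test, not a confirmed fix.

        /* Sketch: size the inner box to fit the outer one (560px content width
           minus the inner box's own 10px left/right padding). */
        div .roundbigboxkundei {
            margin-top: 10px;
            padding: 10px;
            padding-bottom: 10px;
            width: 540px;      /* was 760px, wider than the 560px parent */
            height: 1%;
            position: relative;
            overflow: hidden;
        }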

    Read the article

  • C# Nested Property Accessing overloading OR Sequential Operator Overloading

    - by Tim
    Hey, I've been searching around for a solution to a tricky problem we're having with our code base. To start, our code resembles the following:

        class User
        {
            int id;
            int accountId;
            Account account
            {
                get { return Account.Get(accountId); }
            }
        }

        class Account
        {
            int accountId;
            OnlinePresence Presence
            {
                get { return OnlinePresence.Get(accountId); }
            }
            public static Account Get(int accountId)
            {
                // hits a database and gets back our object.
            }
        }

        class OnlinePresence
        {
            int accountId;
            bool isOnline;
            public static OnlinePresence Get(int accountId)
            {
                // hits a database and gets back our object.
            }
        }

    What we're often doing in our code is accessing a user's presence through the account property:

        var presence = user.Account.Presence;

    The problem is that this makes two requests to the database: one to get the Account object and one to get the Presence object. We could easily knock this down to one request by doing the following:

        var presence = UserPresence.Get(user.id);

    This works, but it requires developers to know about the UserPresence class and its methods, which it would be nice to eliminate. I've thought of a couple of ways to handle this problem, and was wondering if anyone knows whether they are possible, whether there are other ways of handling this, or whether we just need to think more as we code and call UserPresence.Get instead of using properties.

    1. Overload nested accessors. It would be cool if, inside the User class, I could write some sort of "extension" that says "any time a User object's Account property's Presence object is being accessed, do this instead".

    2. Overload the . operator with knowledge of what comes after. If I could somehow overload the . operator only in situations where the object on the right is also being "dotted", it would be great.

    Both of these seem like things that could be handled at compile time, but perhaps I'm missing something (would reflection make this difficult?). Am I looking at things completely incorrectly? Is there a way of enforcing this that removes the burden from the user of the business logic? Thanks! Tim
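    Neither intercepting a nested property chain nor overloading the . operator is possible in C#, so one pragmatic sketch (an assumption about what would fit this code base, not the only option) is to keep the single-query UserPresence.Get call but hide it behind a property on User, so callers never need to know which class does the fetch.

        // Sketch - names follow the classes above; Presence is a new property,
        // and UserPresence is the existing single-query helper from the question.
        class User
        {
            int id;
            int accountId;

            Account account
            {
                get { return Account.Get(accountId); }
            }

            // One round trip: delegates to the existing helper, so callers can
            // write user.Presence without ever touching UserPresence directly.
            public OnlinePresence Presence
            {
                get { return UserPresence.Get(id); }
            }
        }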

    Read the article

  • DevExpress Reporting Session Issue

    - by LeeHull
    This is pretty complicated, so I will explain what is going on. I have created an ASP.NET website for displaying images. We have a database table that contains the URL where each image is located; however, we recently started moving the images off the filesystem and storing them directly in the database as binary. Since we don't want to break older applications, we currently store them in both places until everything is updated.

    The website I'm working on is just a reporting site that displays images by date range. I created an HttpHandler to serve each image: if the binary column is DBNull, I grab the URL from the other table instead; otherwise I read the image, convert it to a byte[], and write it to the response so it is interpreted as an image.

    I created a page to display the report using DevExpress Reporting. I query the data, save the DataSet into session, and the report reads the session to bind itself. I also have a picture box in the report bound to "Handler.ashx?id=ImageID"; I do it this way because I am not binding a URL to the image, since it is a byte[]. And since a report could contain 10,000+ images, I read the DataSet from session inside the handler and pull out the image for the ImageID that was passed in, to avoid connecting to the database for each and every row.

    Now the strange thing: the report loads fine, the data loads, paging works in the DevExpress report viewer, and the images load fine. However, when I print or export, I get no images. I debugged and found that Session.Count is 0 when it tries to print or export, which is strange since there is more than just the DataSet in the session. I have added IRequiresSessionState to allow session access in the handler, but the session count is still 0. I have also tried changing the handler to an .aspx page; same issue.

    Any ideas or suggestions on logic changes? I'm all ears. This is a very difficult situation because I have to display images from the database as both a URL string and a byte[], since one source is a URL and the other is the image bytes, but I also can't slam the database with connection calls for every row. I have reported the issue to DevExpress and they are clueless as well, since they don't dispose of the session during the exporting process.
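    For reference, a sketch of the handler arrangement described above, with session access enabled via IRequiresSessionState. The session key and column names are illustrative, and the database fallback is an assumption added so an image can still be served when the session turns out to be empty (as it apparently is during print/export).

        // Sketch only: key/column names ("ReportData", "ImageID", "ImageBytes")
        // and the PNG content type are assumptions.
        using System.Data;
        using System.Web;
        using System.Web.SessionState;

        public class ImageHandler : IHttpHandler, IRequiresSessionState
        {
            public void ProcessRequest(HttpContext context)
            {
                int imageId = int.Parse(context.Request.QueryString["id"]);
                byte[] bytes = null;

                // Try the session-cached DataSet first to avoid one query per row.
                DataSet ds = context.Session["ReportData"] as DataSet;
                if (ds != null)
                {
                    DataRow[] rows = ds.Tables[0].Select("ImageID = " + imageId);
                    if (rows.Length > 0 && !(rows[0]["ImageBytes"] is System.DBNull))
                        bytes = (byte[])rows[0]["ImageBytes"];
                }

                // Fallback: hit the database only when the session has nothing for us.
                if (bytes == null)
                    bytes = LoadImageFromDatabase(imageId);

                context.Response.ContentType = "image/png";
                context.Response.BinaryWrite(bytes);
            }

            public bool IsReusable { get { return false; } }

            private static byte[] LoadImageFromDatabase(int imageId)
            {
                // Placeholder for the existing data-access code.
                throw new System.NotImplementedException();
            }
        }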

    Read the article

  • Using QT to build a WYSIWYG Editor for a Custom Markup Language

    - by Aaron
    I'm new to Qt and am trying to figure out the best way to build a WYSIWYG editor widget for a custom markup language that displays simple text, images, and links. I need to be able to propagate changes from the WYSIWYG editor back to the custom markup representation.

    As a concrete example of the problem domain, imagine that the custom markup has a "player" tag which contains a player name and a team name. The markup could look like this:

        Last week, <player id="1234"><name>Aaron Rodgers</name><team>Packers</team></player> threw a pass.

    This text would display in the editor as:

        Last week, Aaron Rodgers of the Packers threw a pass.

    The player name and the team name would be editable directly within the editor in standard WYSIWYG fashion, so my users do not have to learn any markup. Also, when the player name is moused over, a details pop-up will appear for that player, and similarly for the team.

    With that long introduction, I'm trying to figure out where to start with Qt. The most logical option seems to be the rich text API using a QTextDocument, but that approach seems less than ideal given QTextDocument's limitations:

    - I can't figure out how to capture navigation events when links are clicked.
    - Following links on click seems to be enabled only when the QTextEdit is read-only.
    - Custom objects that implement QTextObjectInterface are ignored in copy-and-paste operations.
    - Any HTML-based markup passed to it as rich text is retranslated into a series of span tags and lots of other junk, making it extremely difficult to propagate changes from the editor back to the original custom markup.

    A second option appears to be QtWebKit, which allows live editing of HTML5 markup, so I could specify a two-way translation between the custom markup and HTML5. I'm not clear, though, on how one would propagate changes from the editor back to the original markup in real time without re-translating the entire document on every text change. The WebKit solution also looks awfully bulky to me (learning WebKit along with Qt) for what should be a relatively simple problem.

    I have also considered implementing the WYSIWYG as a custom class built manually from native Qt containers, labels, images, and other widgets. This seems like the most flexible approach, and the one least likely to run into unresolvable problems. However, I'm pretty sure that implementing all the details of a normal text editor (selecting text, font changes, cut-and-paste support, undo/redo, dragging of objects, cursor placement, etc.) would be incredibly time consuming.

    So, finally, my question: are there any Qt gurus out there with some advice on where to start with this sort of project? BTW, I am using Qt because the application is a desktop application that needs platform independence.
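    On the first two limitations (catching link clicks while the widget stays editable), one common workaround is to ask the document for the anchor under the mouse yourself rather than relying on read-only link navigation. A rough sketch, assuming a QTextEdit subclass; the class name, signal name, and Ctrl+Click convention are made up for illustration.

        // Sketch (Qt 4.x): detect clicks on anchors even while the widget is editable.
        #include <QTextEdit>
        #include <QMouseEvent>

        class MarkupEdit : public QTextEdit {
            Q_OBJECT
        public:
            explicit MarkupEdit(QWidget *parent = 0) : QTextEdit(parent) {}

        signals:
            void linkActivated(const QString &href);   // hypothetical signal name

        protected:
            void mouseReleaseEvent(QMouseEvent *e) {
                // anchorAt() returns the href under the cursor, or an empty
                // string when the click was not on a link.
                const QString href = anchorAt(e->pos());
                if (!href.isEmpty() && (e->modifiers() & Qt::ControlModifier))
                    emit linkActivated(href);          // e.g. Ctrl+Click follows the link
                QTextEdit::mouseReleaseEvent(e);
            }
        };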

    Read the article

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background

    Users can pick dates on a form (the screen shot in the original question, showing the date picker, is not reproduced here). Any starting month/day and ending month/day combinations are valid, such as:

        Mar 22 to Jun 22
        Dec 1 to Feb 28

    The second combination is difficult (I call it the "tricky date scenario") because the year for the ending month/day is before the year for the starting month/day. That is to say, for the year 1900 (also selected in the screen shot), the full dates would be:

        Dec 22, 1900 to Feb 28, 1901
        Dec 22, 1901 to Feb 28, 1902
        ...
        Dec 22, 2007 to Feb 28, 2008
        Dec 22, 2008 to Feb 28, 2009

    Problem

    Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year-wrapping problem.

    Inputs

    The query receives as parameters:

        Year1, Year2: The full range of years, independent of month/day combination.
        Month1, Day1: The starting day within the year to gather data.
        Month2, Day2: The ending day within the year (or the next year) to gather data.

    Previous attempt

    Consider the following MySQL code (which worked):

        end_year = start_year + greatest(
            -1 * sign(
                datediff(
                    date( concat_ws( '-', year, end_month,   end_day   ) ),
                    date( concat_ws( '-', year, start_month, start_day ) )
                )
            ),
            0
        )

    How it works, with respect to the tricky date scenario:

        1. Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900.
        2. Count the difference, in days, between the two dates.
        3. If the result is negative, the year for the second date must be incremented by 1. In this case:
           - Add 1 to the current year.
           - Create a new end date: Feb 28, 1901.
           - Check whether the date range for the data falls between the start and calculated end date.
        4. If the result is positive, the dates were provided in chronological order and nothing special needs to be done.

    This worked in MySQL because the difference in dates could be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of the dates' relative chronological order.

    Question

    How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)?

        SELECT m.amount
        FROM measurement m
        WHERE (extract(MONTH FROM m.taken) >= month1 AND extract(DAY FROM m.taken) >= day1)
          AND (extract(MONTH FROM m.taken) <= month2 AND extract(DAY FROM m.taken) <= day2)

    Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.)

    Versions

    PostgreSQL 8.4.4 and PHP 5.2.10
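    A sketch of the kind of wrap-aware condition being asked about (assumptions: the :month1/:day1/:month2/:day2 placeholders stand for the bound parameters, and the month/day inputs are plain integers). It encodes each month/day pair as a single mmdd number and branches on whether the window wraps the end of the year.

        -- Sketch only: when the end pair sorts before the start pair, the window
        -- wraps the year boundary (e.g. Dec 22 to Feb 28), so match either side.
        SELECT m.amount
        FROM   measurement m
        WHERE  CASE
                 WHEN (:month2 * 100 + :day2) >= (:month1 * 100 + :day1) THEN
                      extract(MONTH FROM m.taken) * 100 + extract(DAY FROM m.taken)
                          BETWEEN (:month1 * 100 + :day1) AND (:month2 * 100 + :day2)
                 ELSE
                      extract(MONTH FROM m.taken) * 100 + extract(DAY FROM m.taken) >= (:month1 * 100 + :day1)
                   OR extract(MONTH FROM m.taken) * 100 + extract(DAY FROM m.taken) <= (:month2 * 100 + :day2)
               END;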

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >