Search Results

Search found 20049 results on 802 pages for 'virtual drive'.


  • mod_wsgi - Apache configuration file

    - by Kevin
    Sorry guys, I'm a newbie to this, but I've been following the mod_wsgi configuration tutorial and it's very spotty. In my httpd.conf file I add the virtual host like so:

      # 'Main' server configuration
      # The directives in this section set up the values used by the 'main' server,
      # which responds to any requests that aren't handled by a virtual host
      # definition. These values also provide defaults for any containers you may
      # define later in the file. All of these directives may appear inside
      # containers, in which case these default settings will be overridden for
      # the virtual host being defined.
      ServerName wsgihost
      DocumentRoot "/Library/WebServer/Documents"

      <Directory "/Library/WebServer/Documents">
          Order allow,deny
          Allow from all
      </Directory>

      WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi

      <Directory "/Users/KL/modwsgi/env">
          <Files myapp.wsgi>
              Order allow,deny
              Allow from all
          </Files>
      </Directory>

    I also added the following entry to my hosts file: 127.0.1.1 wsgihost. But I can't seem to connect. Am I doing something terribly wrong?
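
    For reference, mod_wsgi expects the script named by WSGIScriptAlias to expose an "application" callable; the tutorial's myapp.wsgi is presumably something like this minimal sketch (the poster's actual script isn't shown):

      # myapp.wsgi - smallest useful WSGI app; mod_wsgi looks for 'application'
      def application(environ, start_response):
          status = '200 OK'
          output = b'Hello World!'
          response_headers = [('Content-type', 'text/plain'),
                              ('Content-Length', str(len(output)))]
          start_response(status, response_headers)
          return [output]

    With that in place, the app should answer at http://wsgihost/myapp. If the hosts entry is the problem, 127.0.0.1 is a safer loopback address than 127.0.1.1 on a Mac (the /Library/WebServer paths suggest OS X).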

  • Setup Apache with IPv6

    - by mrz
    I have two virtual machines on my computer, and Apache is installed on one of them (referred to as the "server" below). I have set Apache to listen on an IPv6 address, and when I enter that address into a web browser on the server I see my index.html file. So far so good. I want to be able to open a web browser on the other virtual machine (the "client") and see index.html. But when I try entering the server's IPv6 address in a web browser on the client, I get an "Unable to establish a connection to the server" error. I can ping6 the client from the server and vice versa. One thing worth mentioning: ifconfig on the server shows 3 different IPv6 addresses, two of them with Global scope and one with Link scope. On the client there is only one Link-scope IPv6 address, though. I can only ping the Link-scope addresses; pinging the other IPv6 addresses results in "connect: Network is unreachable". And if I set Apache to listen on the Link-scope address, rcapache2 start fails. Any thoughts on what I am probably missing/doing wrong?
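
    The scope detail is probably the whole story: link-local (fe80::/10) addresses only work with an explicit interface/zone (ping6 needs -I eth0 for them, and browsers have no way to express a zone in a URL), so the client needs a global address before it can reach the server's global ones. A sketch of what that might look like, assuming standard tooling (interface names and addresses here are made up):

      # on the server: listen on every IPv6 address, port 80 (httpd.conf)
      Listen [::]:80

      # on the client: add a global/ULA address on the same subnet as one of
      # the server's global addresses
      ip -6 addr add fd00:10::2/64 dev eth0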

  • Is it possible to recover from the Windows 8 refresh feature?

    - by Warren P
    The intention of the Windows 8 RTM (released version) Refresh feature is to restore the system to the way it was when I first installed. It didn't, though. Almost everything that came on the start screen (not a menu any more) is gone: not just the third-party apps I installed, but everything other than the icons for Internet Explorer, the Store, and the desktop was wiped. Out of the box, Windows 8 had a pretty large list of things installed, and it seems that the Refresh feature wipes all of them out. Is it possible to really get the system back to a fresh-install state, or should I just re-install from the DVD I made? (I have access to Windows 8 RTM, legally, through the MS Action Pack subscription.) I suspect that if I create a new account I might get a new desktop with a default set of icons, but I'm hoping it might be possible to do this without using a different login. The second problem with Windows 8 Refresh is that it seems to trash the ACLs on my C: drive, taking away all permissions for access to the C: drive and making nothing on it visible or readable to my primary and only login account. I believe it might be possible to undo the damage done by the "refresh" with some judicious use of icacls from the command prompt.
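
    For what it's worth, the icacls idea from the question would look roughly like this (a sketch, run from an elevated command prompt; "YourUser" is a placeholder, and it's worth testing on a single folder before touching the whole drive):

      :: take ownership of the tree, answering Yes to all prompts
      takeown /F C:\ /R /D Y

      :: re-grant the login full control, inheriting down to folders and files
      icacls C:\ /grant YourUser:(OI)(CI)F /T /C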

  • VMware ESX Linux Guest Customization

    - by andyh_ky
    Hello, I am interested in deploying several RHEL 4 Update 8 virtual machines to create a test environment. Here are the steps I am taking: in off hours, P2V/V2V the production machines and convert them to templates; then deploy the virtual machines with a customization specification that changes the hostname and IP address. I am interested in how these processes are done and whether there are any options for further customization. Are the machines brought onto the network when they are powered on, before they are reconfigured? Is there a potential IP address conflict? Is there an option to run additional scripts which reside on the guest as part of the reconfiguration, for example restoring an Oracle database? This is an option with Windows guests and sysprep, but I have been unable to locate anything showing a RHEL equivalent. I am dealing with a multi-tier application. The main issue I am attempting to mitigate is that the application servers reference the database servers by hostname and in tnsnames files. I want to script the reconfiguration of the application during deployment so that the app/db servers point at the test environment. I am OK with placing the 'cleanup' script on the source and executing it after the machine has been brought up. What I am mainly after is automating the script's execution post clone/boot, and whether there could be an IP address conflict. (Cross-posted to VMTN's ESX 4 community.)
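
    Lacking a sysprep equivalent for RHEL 4, a common trick is a self-removing init script baked into the template, which gives you a post-clone hook on first boot. A sketch (the hostnames, runlevel link and Oracle path are all assumptions):

      #!/bin/sh
      # /etc/rc3.d/S99postclone - one-shot cleanup after cloning
      # repoint the app tier at the test database
      sed -i 's/HOST = proddb01/HOST = testdb01/' \
          /u01/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
      # remove ourselves so this runs exactly once
      rm -f /etc/rc3.d/S99postclone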

  • Can't start my phpMyAdmin

    - by vrynxzent
    I am creating my own portable server, but I can't get phpMyAdmin to run; MySQL, PHP and Apache are running, but not phpMyAdmin. When I check Apache's error log, it states:

      [Fri Nov 09 08:54:37 2012] [warn] pid file F:/Drive WebServer/Drive WebServer/bin/Debug/Apache2bak/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
      PHP Warning: PHP Startup: Unable to load dynamic library './php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0
      PHP Warning: PHP Startup: Unable to load dynamic library './php_mysqli.dll' - The specified module could not be found.\r\n in Unknown on line 0
      [Fri Nov 09 08:54:37 2012] [notice] Apache/2.0.64 (Win32) PHP/5.2.17 configured -- resuming normal operations
      [Fri Nov 09 08:54:37 2012] [notice] Server built: Oct 18 2010 01:36:23
      [Fri Nov 09 08:54:37 2012] [notice] Parent: Created child process 6784

    I manually assigned the exact path, F:/php/ext/php_mysql.dll, but got the same error:

      PHP Warning: PHP Startup: Unable to load dynamic library 'F:/php/ext/php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0

    I have this option set in php.ini: extension_dir = "./". Another error also pops up: it says libmysql.dll is missing. PHP version: 5.2.17. Any help would be appreciated. ;)
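
    The wording of that warning is misleading: Windows reports "the specified module could not be found" when any DLL in the dependency chain is missing, and php_mysql.dll depends on libmysql.dll. So the likely fix is an absolute extension_dir plus making libmysql.dll visible to the Apache process. A sketch, assuming PHP lives in F:\php:

      ; php.ini
      extension_dir = "F:/php/ext"
      extension = php_mysql.dll
      extension = php_mysqli.dll

    Then either copy F:\php\libmysql.dll into Apache's bin directory or add F:\php to the PATH seen by the Apache service, and restart Apache.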

  • Running Windows 7 physical disk virtualized under Linux

    - by CajunLuke
    I have an existing Windows 7 installation that I'd like to virtualize under Linux. Windows boots fine on disk A, Linux boots fine on disk B. (Both disks are SATA.) I can mount the Windows disk when in Linux. I've tried VirtualBox and VMware Player and neither will let me boot from the other disk. VirtualBox doesn't seem to have the option to do so. VMware Player has the option to expose an IDE drive to the virtual environment as a SCSI disk. I've tried that, but it throws the error "Cannot connect virtual device ide1:0 because no corresponding device is available on the host." I've verified that it's pointing to the correct hard drive. I'm willing to try other virtualization products, and I'm not averse to spending a little money to get this to work. I've seen this other question, and it's not a duplicate, as I haven't gotten that far yet. I'm also interested in solutions going the other way (Linux on Windows), but that'd be lagniappe. Gory hardware details: Lenovo T410, 2.4 GHz Core i5 (has virtualization extensions), 4 GiB RAM, 2x 320 GiB SATA HDD (one in the optical bay), Fedora 14 (2.6.35.10-74.fc14.x86_64), Windows 7 32-bit.
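
    VirtualBox does have this capability, just not in the GUI: a raw-disk VMDK wrapper can hand the whole physical disk to the VM. A sketch (assuming the Windows disk shows up as /dev/sdb and your user has read/write access to that device):

      # create a wrapper VMDK that points at the raw disk
      VBoxManage internalcommands createrawvmdk \
          -filename ~/win7-raw.vmdk -rawdisk /dev/sdb

      # then attach win7-raw.vmdk to a new VM as its hard disk and boot it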

  • SQL Server: Network pauses after installing cheap SATA card: Is there a solution?

    - by samsmith
    At the risk of being assigned to the "bad DBA" club... I did something desperate, and may have to undo it. Problem: after installing a low-cost eSATA board, my SQL Server is intermittently unresponsive (seemingly when there is a lot of IO to the eSATA drive). Questions: 1) Is there a solution to the intermittent unresponsiveness that allows me to keep the eSATA in place? 2) Whether or not (1==true): what is a decent, low-cost way to add 1-3 TB of storage to SQL for non-critical SQL DBs? Detail: our SAN is full, and expanding it is costly and will take a month. I have a pressing need to add 1-3 TB for some development DBs (e.g. not mission critical; data loss is OK). As a bandaid, I threw a $20 eSATA PCI board in the Dell 1950 server and attached an external 2 TB eSATA drive. This seemed to work fine, but I notice that our production SQL DBs, and even remote desktop, now experience network "pauses" that they never did before (with both SQL client apps and remote desktop throwing "networking problem" errors). This SQL Server has lots of memory, and runs an instance of SQL 2005 (where all line-of-business apps reside) and an instance of SQL 2008 (for development DBs). SQL Server RAM has been appropriately configured, and this setup has run great for years. The server is:

      Dell 1950
      Win2003 x64
      14GB RAM
      PERC controller, 2 mirrored HDs internal
      Dell SAN over gbit ethernet, dual homed
      2 PCIx slots (1 used by NIC for SAN, 1 now in use for eSATA board)

    Thank you for suggestions!

  • Shrinking windows and recovery partitions on the samsung new series 9

    - by bobbaluba
    I just bought a Samsung NP900X3C, and as I was going to install Linux I noticed that the Windows and recovery partitions occupy a major portion of the disk. The disk is a 128 GB SSD, and I want to keep the Windows partition in order to play some games once in a while, but the Windows disk is already 45 GB full (with no installed programs) and the recovery partition is 20 GB. That leaves under 60 GB for Linux, which is not optimal, since that is what I'm going to be using most of the time, and there would be no room for games on the Windows partition. There are also two small partitions whose purpose I don't know: one of 100 MB at the start of the disk that I'm guessing is some kind of boot partition, and one of 5 GB that is described as an "OS/2 hidden C: drive". What I'm wondering is: can I delete the recovery partition? What about the mystical 5 GB partition? Here is what fdisk reports:

      ubuntu@ubuntu:~$ sudo fdisk -l

      Disk /dev/sda: 128.0 GB, 128035676160 bytes
      255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x83953ffc

         Device Boot      Start        End    Blocks  Id  System
      /dev/sda1   *        2048     206847    102400   7  HPFS/NTFS/exFAT
      /dev/sda2          206848  198273023  99033088   7  HPFS/NTFS/exFAT
      /dev/sda3       198273024  207276031   4501504  84  OS/2 hidden C: drive
      /dev/sda4       207276032  250068991  21396480  27  Hidden NTFS WinRE

  • Lenovo Thinkpad T430 not booting from HDD if there is a USB modem connected

    - by user93353
    I have a Lenovo ThinkPad T430 running Win7. I use a ZTE USB modem (something like this) for my internet connection. I usually keep the modem plugged into the USB port even when the laptop is shut down or hibernating. This worked fine on my earlier laptops, but with the Lenovo my laptop doesn't boot if the modem is in the USB port. It shows the initial character-based screen where it gives the ThinkPad message and BIOS details, and then waits. If I pull out the modem, it goes ahead. I have disabled USB as a boot option in my BIOS settings, but even then this happens sometimes (though not every time), and likewise while resuming from hibernation. The USB modem also carries drivers and an ISP connection client which get installed the first time you use it on any machine. I have used multiple laptops (HP, Dell, Acer, Gateway) but never faced this problem before. I have friends who use other ThinkPad models but haven't faced this issue. Any resolutions or workarounds for this?

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it, I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong, and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping files of c. 1 GB+, UPDATEs on large MySQL tables, etc., has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which is lower spec. That took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB; now it is only writing about 1 MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
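
    Before reinstalling, it's worth ruling out a failing drive, since "fast at first, then crawling" is a classic symptom. A quick sketch with standard tools (the sysstat and smartmontools packages; device names assumed):

      iostat -x 5            # during a slow run: high await/%util points at the disk
      smartctl -a /dev/sda   # look at reallocated/pending sector counts
      free -m                # confirm whether swap is actually being hit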

  • Multi-petabyte scale out storage solution [closed]

    - by Alex Yuriev
    Let's say that I need a single-namespace, multi-petabyte object store with a file-system-like wrapper. What is currently out there that supports the following:

      - a single namespace that can take 1B files
      - support for multiple entry points using NFS
      - at least node-level replication (preferably node- and file-level replication)
      - online software upgrades
      - no "magic sauce" on the storage layer

    The following has been evaluated: Gluster & Lustre - just ick - fundamental lack of understanding of why online upgrades are mandatory. OneFS - we have it; it is smelling more and more like it hides a dead body under the hood. Other than MapR and ZFS, am I missing anything? P.S. Oh yes, I keep forgetting that the forums are for people to discuss whether a 2TB drive actually stores 2TB of info. My bad. Seriously though: how the heck can "meets the following requirements" be considered a "debate"? P.P.S. I did not throw an idiotic insult; I pointed out that this is actually an interesting question compared to a conversation about the storage capacity of a 2TB hard drive. It is not a question of what works better; it is a question that asks whether I missed any of the products that currently exist which fit the criteria, where the criteria are clearly outlined. I got one answer below which included something that I have not looked at in a long time, and which looks quite a bit more grown up than when I briefly looked at it before.

  • oracle access on vmware fusion

    - by gaudi_br
    Hello, I'm running Snow Leopard and doing some development that requires some network knowledge. I've installed VMware Fusion 3.0 and set up a virtual machine with Windows 2003 Server. I need to mimic the exact configuration of another server on the network, so I really need to run the versions I mention here. I also set up two network configurations on the VM: one NAT config (so that I can have internet access) and one host-only config (because I need to use another server's MAC address, and my local area network might have a problem with that). After installing Windows 2003, I installed Oracle 10.2.0.1. During the installation I received a warning about the primary IP address of the system being DHCP-assigned, but I ignored it (maybe that was a mistake)... Now, from experience, unless the DHCP-assigned address changes, I should be able to access the guest system's database from the host system, so I went to Safari and tried to access the Oracle EM. As it turns out, because my computer is on a company network, the company's DNS doesn't know about the virtual machine unless I switch to a bridged network config. However, I don't want to do that because I don't want to mix up the domains. So I guess the question is: how can I define my own DNS or router, or whatever it is that I need to define, so that whenever I try the guest system's IP address from the host, it will use the vmnet1 or vmnet8 interface defined by VMware and bypass the DNS configuration of my local area network? I'd also like to know what to do in case I want to change IP addresses on the guest machine without having Oracle go haywire (I've noticed a few folders in the structure which are specific to the very first IP address)... Any help would be appreciated. Thanks in advance.
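
    Since only the Mac needs to resolve the guest, the host's /etc/hosts file is usually enough; no DNS changes are needed, and lookups never leave the machine. A sketch (the address is hypothetical; use whatever the guest's host-only NIC actually has, and consider making it static inside Windows so the DHCP warning from the Oracle installer goes away):

      # /etc/hosts on the Snow Leopard host
      192.168.118.130   oraclevm

    Enterprise Manager for 10.2 would then typically be reachable at https://oraclevm:1158/em.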

  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI) (1 answer)

    I have a server running Apache 2.4 on which several virtual hosts run. The problem I noticed is that if I try to access, let's say, https://example.com, which has no SSL set up, Apache will automatically serve the first VHost that has SSL activated (which is literally not the same site). How can we prevent this strange behaviour; in other words, how do I tell Apache to ignore SSL for a given site? Here's a sample of what my .conf files look like:

      <VirtualHost foobar.com:80>
          DocumentRoot /somepath/foobar.com
          <Directory /somepath/foobar.com>
              Options -Indexes
              Require all granted
              DirectoryIndex index.php
              AllowOverride All
          </Directory>
          ServerName foobar.com
          ServerAlias www.foobar.com
      </VirtualHost>

      <VirtualHost test.example.com:443>
          DocumentRoot /somepath/
          <Directory /somepath/>
              Options -Indexes
              Require all granted
              AllowOverride All
          </Directory>
          ServerName test.example.com
          SSLEngine on
          SSLCertificateFile [...]
          SSLCertificateKeyFile [...]
          SSLCertificateChainFile [...]
      </VirtualHost>

    With this, if I try to access https://foobar.com, Chrome shows me an SSL error mentioning that the server was identifying itself as test.example.org. Thanks in advance!
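
    Apache has to hand every :443 connection to some SSL vhost, and when no ServerName matches it falls back to the first one loaded, which is exactly the behaviour described. The usual mitigation is a catch-all default SSL vhost, loaded before the real ones, so requests for non-SSL names hit a dead end instead of another site. A sketch (certificate paths are placeholders; a self-signed certificate is fine here since a browser warning is expected anyway):

      <VirtualHost _default_:443>
          ServerName default.invalid
          SSLEngine on
          SSLCertificateFile    /etc/apache2/ssl/placeholder.crt
          SSLCertificateKeyFile /etc/apache2/ssl/placeholder.key
          Redirect 404 /
      </VirtualHost>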

  • moving from WinXP to WinServer in VmWare

    - by Alex
    I have a VMware machine for .NET application testing. Current setup: host OS: Win7; guest OS: right now the guest OS is Win XP Pro x64, which runs great with just 1 gigabyte of RAM and 10 gigs of disk space. * This part can be skipped * As I said, there was a program that I needed to test, but unfortunately, by default, VMware installs crappy display drivers (called SVGA II) on XP machines and there is NO way to upgrade them! This resulted in my program's error (the program used SlimDX, a DirectX wrapper, to do some stuff...). Eventually I found out that the display drivers most certainly are the problem. For example, a Windows 7 virtual machine uses SVGA 3D drivers, and I have NO problems running my SlimDX-based program there. Now, regarding Windows Server 2008: apparently the WDDM driver is supported by WS2008, which means that I'll be able to install SVGA 3D and test my DX apps. * end of skip * The questions are: Will WS2008 be as smooth with just 1 gig of RAM as Win XP was? Will 10 gigs of HDD be enough, or does the server require more? Will I be able to install .NET ver. 4 on WS2008? Are there any limitations that I need to be aware of as a .NET programmer? EDIT: I was hoping that WS2008 is XP-based, not Vista-based/W7-based. In comparison, a W7 virtual machine with 2 gigs of RAM and 2 processor cores nearly kills my host OS, whereas WinXP runs extremely fast even with 1 core and 1 gig of RAM. That's the main reason why I want to try WS2008.

  • How can I extend / create a new partition from the following setup?

    - by Kiada
    I'm a little unsure what to do in this situation. When I try to create a new simple volume from the unallocated space, I get an error because I already have 4 partitions. I have no option to extend either my C:\ primary partition or the E:\ logical drive.

      C:\ - Gaming Win7 install
      D:\ - Storage
      Unallocated space - would somehow like to install OS X on a partition from this space
      E:\ - Software development Win7 install
      I:\ - Ignore this; it's an external 1TB HDD

    Do I have any options that do not involve formatting / losing information on either C:\ or E:\? Thank you. Link to visual disk partitioning setup image. Edit: a bit more information regarding partitions. Firstly, the image linked above is a screenshot of the Windows 7 partitioning tool, easier to read than text I guess!

      H:\ System Reserved: 100MB NTFS
      C:\ 244 GB NTFS Healthy (Page File, Primary Partition)
      D:\ 294 GB NTFS Healthy (Primary Partition)
      E:\ 100 GB NTFS Healthy (Boot, Page File, Crash Dump, Logical Drive)
      Unallocated 292 GB

    Hope this helps :)

  • Setting up a server that routes local traffic through vpn, while still being able to access internet directly

    - by Kazuo
    The goal is to set up a local server that routes local traffic through an uncontrolled remote VPN service while still being able to access the internet directly (not tunneled via VPN) and provide services through that direct connection. It is supposed to look like this: http://i.stack.imgur.com/74dGC.png Note: there is another router with a modem between the local server and the internet. What is the easiest (best?) way to get this network setup working? I'm planning to set up the connection between the local router and the local server with simple IP forwarding. The problem now is that all the server's traffic is routed through the VPN tunnel as soon as I connect the server's OpenVPN client to the remote service, so there is no direct internet connection available. My first idea was to set up a virtual machine (an LXC container or something) and run the VPN client and local networking stuff in the VM, so that the VM receives all the incoming traffic from the local router and tunnels it through the VPN. This, as far as I understand, should not affect the physical server's network connection and should allow it to provide services to the internet. Before I start trying to set this up (I don't have much experience in networking), is there any easier or better way to do this? I would be thankful for every suggestion. Edit: let's say the interface connected to the internet is eth0 and the interface connected to the local router is eth1. Another idea would be to create a virtual interface eth0:0, specify it as OpenVPN's local endpoint, and then force any traffic coming from eth1 through eth0:0. I'm not sure how I would force the traffic through eth0:0, though (possibly by adding routes).
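
    There is an easier way that needs no VM: policy routing. Keep OpenVPN from replacing the default route, then steer only the traffic arriving from the router's interface into the tunnel. A sketch with iproute2, using the eth0/eth1 names from the question (the table name and tun0 are assumptions):

      # in the OpenVPN client config: don't let the server push a default route
      #   route-nopull

      # one-time: define a routing table for VPN-bound traffic
      echo "100 vpn" >> /etc/iproute2/rt_tables

      # anything arriving on eth1 (from the local router) uses that table,
      # whose default route is the VPN tunnel
      ip rule add iif eth1 table vpn
      ip route add default dev tun0 table vpn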

  • Corrupted Laptop - Transfer Data Without My Computer (explorer.exe) Using Command Line

    - by nicorellius
    I have a laptop (Toshiba Satellite) that is pretty messed up. The person it belongs to has already replaced it with a new machine. My job is to transfer all her data from this machine to disk so that I can hand it over to her. She doesn't want to lose any data (understandably so). Any operation I attempted (i.e., double-clicking on any folder icon, like My Documents, My Computer, etc.) resulted in a complete crash. The only good news is that I can actually start up and navigate around using the command line, and I can access the internet. I have a network, so if I can map the drives I can get this thing figured out (hopefully). I also tried a USB drive, but I couldn't figure out how to access it from the command line. Two questions (I need to use the command line for these): how would I go about accessing the USB drive, and how can I map the shared drives on my network so that I may cd to that directory and use the copy command?
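
    Both jobs are doable from cmd.exe. A sketch, with the machine, share and folder names as placeholders:

      :: map a share from a working machine on the network
      net use Z: \\OTHERPC\backup /user:OTHERPC\me

      :: copy the profile: subfolders, hidden/system files, keep going past errors
      xcopy "C:\Documents and Settings\Her" Z:\Her /E /C /H /K /Y

    A USB drive still gets a drive letter even without Explorer; running diskpart and typing "list volume" at its prompt shows which letter, and after that it can be used like any other path.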

  • I'm having trouble getting my server to appear online.

    - by JMRboosties
    Total newb question, I'm sure. First I installed WAMP (http://www.wampserver.com), and I was able to access my pages from other computers on my router's network and from the virtual device used to debug Android programs (the purpose of my having a server). This functionality failed, however, at some point over these past few days. While my own browser displays the pages just fine, other computers, my Android phone (on our room's wifi), and my virtual device are no longer able to connect to my pages. I had not made any changes to the settings. I uninstalled WAMP and installed EasyPHP. However, the problem was not resolved. I know this is rather vague, but does anyone here have an idea of what may have happened? I forwarded both port 80 (I know it's the default HTTP port; I did it just to be safe) and now port 8888, which EasyPHP uses. I turned off the firewall on my hosting computer for good measure. I cannot access my pages either from remote computers or from computers using my router. Any ideas you may have on how to resolve this would be awesome, thanks a lot. And if you need any more info, please tell me.
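
    One thing worth ruling out before blaming the router: some EasyPHP builds ship with Apache bound to the loopback address only, which produces exactly this symptom (works locally, unreachable from every other machine). Treat the exact default as an assumption and check httpd.conf:

      # httpd.conf
      # Listen 127.0.0.1:8888    <- reachable only from this machine
      Listen 8888                # bind all interfaces instead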

  • Windows XP to remote server 2008 R2 shares - awful response times

    - by nick3216
    I have a network infrastructure of Windows XP clients (a mix of XP and 64-bit XP) that access a network share on a Windows 2008 R2 server. Whenever users type the address of a folder into the address bar of Windows Explorer, it's as snappy at determining the contents of the current folder and presenting them as if they were working on a local drive. But if they open one of the subfolders, they get the animated red torch and the 'Searching for items...' dialog, typically for 45 seconds. Similarly, when using the open-folder dialog to try to select a subfolder on this share, it takes on average 45 seconds for the dialog to expand each node and show its subfolders. Also, while the Explorer instance accessing the network share is running slowly, users notice that the performance of all other Explorer windows suffers: while Explorer is searching for files on the network share, they can't switch to another task and navigate around their local drive using Explorer, because it's now as slow as a dead dog at accessing anything. Are there any settings we can change which will improve the performance of accessing network shares?
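
    One frequently cited XP-side cause of the 'Searching for items...' delay is Explorer probing each remote folder for Scheduled Tasks. The classic tweak is removing that namespace entry on the clients; offered here as something to try rather than a guaranteed fix, and since it's a registry edit, export the key first:

      :: XP clients: stop Explorer's remote "Scheduled Tasks" probe
      reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RemoteComputer\NameSpace\{D6277990-4C6A-11CF-8D87-00AA0060F5BF}" /f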

  • Windows 7 Upgrade Fail from Home Premium to Ultimate Professional

    - by Michael S-B
    I had a hard drive crash, which meant I had to install a new HDD in my Dell 64-bit XPS 1350 (lovely computer). I had previously been running Windows 7 Ultimate Professional, which I had upgraded from the OEM Win 7 Home Premium by means of a disk I purchased from my university. Using the recovery disk from Dell, I installed Windows 7 Home Premium successfully on the new hard drive, but when I try to upgrade to Ultimate via my disk, it installs the whole thing, says it's complete, but after I reboot it tells me: "This version of Windows could not be installed. Your previous version of Windows has been restored, and you can continue to use it." I've installed the drivers from Dell's driver disk, but still to no avail. I've also used Driver Robot to update all my drivers. I can't find a .dmp file anywhere under C:\$WINDOWS.~BT\Sources, but I did find this file under C:\$WINDOWS.~BT\Sources\Panther: setupact.log (https://www.dropbox.com/s/yzy7fhkxlzc235y/setupact.log). If anyone could advise me on what I need to do to fix Windows so it will upgrade properly, I would greatly appreciate it.

  • Xml failing to deserialise

    - by Carnotaurus
    I call a method to get my pages [see GetPages(String xmlFullFilePath)]. The FromXElement method is supposed to deserialise the LitePropertyData elements to strongly typed LitePropertyData objects. Instead it fails on the line return (T)xmlSerializer.Deserialize(memoryStream); and gives the following error: <LitePropertyData xmlns=''> was not expected. What am I doing wrong? I have included the methods that I call and the XML data:

      public static T FromXElement<T>(this XElement xElement)
      {
          using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(xElement.ToString())))
          {
              var xmlSerializer = new XmlSerializer(typeof(T));
              return (T)xmlSerializer.Deserialize(memoryStream);
          }
      }

      public static List<LitePageData> GetPages(String xmlFullFilePath)
      {
          XDocument document = XDocument.Load(xmlFullFilePath);
          List<LitePageData> results = (from record in document.Descendants("row")
                                        select new LitePageData
                                        {
                                            Guid = IsValid(record, "Guid") ? record.Element("Guid").Value : null,
                                            ParentID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                                            Created = Convert.ToDateTime(record.Element("Created").Value),
                                            Changed = Convert.ToDateTime(record.Element("Changed").Value),
                                            Name = record.Element("Name").Value,
                                            ID = Convert.ToInt32(record.Element("ID").Value),
                                            LitePageTypeID = IsValid(record, "ParentID") ? Convert.ToInt32(record.Element("ParentID").Value) : (Int32?)null,
                                            Html = record.Element("Html").Value,
                                            FriendlyName = record.Element("FriendlyName").Value,
                                            Properties = record.Element("Properties") != null
                                                ? record.Element("Properties").Element("LitePropertyData").FromXElement<List<LitePropertyData>>()
                                                : new List<LitePropertyData>()
                                        }).ToList();
          return results;
      }

    Here is the XML:

      <?xml version="1.0" encoding="utf-8"?>
      <root>
        <rows>
          <row>
            <ID>1</ID>
            <ImageUrl></ImageUrl>
            <Html>Home page</Html>
            <Created>01-01-2012</Created>
            <Changed>01-01-2012</Changed>
            <Name>Home page</Name>
            <FriendlyName>home-page</FriendlyName>
          </row>
          <row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <Guid>edeaf468-f490-4271-bf4d-be145bc6a1fd</Guid>
            <ID>8</ID>
            <Name>Unused</Name>
            <ParentID>1</ParentID>
            <Created>2006-03-25T10:57:17</Created>
            <Changed>2012-07-17T12:24:30.0984747+01:00</Changed>
            <ChangedBy />
            <LitePageTypeID xsi:nil="true" />
            <Html>What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be 'well-formed'. The standard that a designer chooses is determined usually by the requirements specification document. This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not 'understand' what the JavaScript does.</Html>
            <FriendlyName>unused</FriendlyName>
            <IsDeleted>false</IsDeleted>
            <Properties>
              <LitePropertyData>
                <Description>Image for the page</Description>
                <DisplayEditUI>true</DisplayEditUI>
                <OwnerTab>1</OwnerTab>
                <DisplayName>Image Url</DisplayName>
                <FieldOrder>1</FieldOrder>
                <IsRequired>false</IsRequired>
                <Name>ImageUrl</Name>
                <IsModified>false</IsModified>
                <ParentPageID>3</ParentPageID>
                <Type>String</Type>
                <Value xsi:type="xsd:string">smarter.jpg</Value>
              </LitePropertyData>
              <LitePropertyData>
                <Description>WebItemApplicationEnum</Description>
                <DisplayEditUI>true</DisplayEditUI>
                <OwnerTab>1</OwnerTab>
                <DisplayName>WebItemApplicationEnum</DisplayName>
                <FieldOrder>1</FieldOrder>
                <IsRequired>false</IsRequired>
                <Name>WebItemApplicationEnum</Name>
                <IsModified>false</IsModified>
                <ParentPageID>3</ParentPageID>
                <Type>Number</Type>
                <Value xsi:type="xsd:string">1</Value>
              </LitePropertyData>
            </Properties>
            <Seo>
              <Author>Phil Carney</Author>
              <Classification />
              <Copyright>Carnotaurus</Copyright>
              <Description>What is the purpose of this option? This option checks the current document for accessibility issues. It uses Bobby to provide details of whether the current web page conforms to W3C's WCAG criteria for web content accessibility. Issues with Bobby and Cynthia Bobby and Cynthia are free services that supposedly allow a user to expose web page accessibility barriers. It is something of a guide but perhaps a blunt instrument. I tested a few of the webpages that I have designed. Sure enough, my pages fall short and for good reason. I am not about to claim that Bobby and Cynthia are useless. Although it is useful and commendable tool, it project appears to be overly ambitious. Nevertheless, let me explain my issues with Bobby and Cynthia: First, certain W3C standards for designing web documents are often too strict and unworkable. For instance, in some versions W3C standards for HTML, certain tags should not include a particular attribute, whereas in others they are requisite if the document is to be 'well-formed'. The standard that a designer chooses is determined usually by the requirements specification document. This specifies which browsers and versions of those browsers that the web page is expected to correctly display. Forcing a hypertext document to conform strictly to a specific W3C standard for HTML is often no simple task. In the worst case, it cannot conform without losing some aesthetics or accessibility functionality. Second, the case of HTML documents is not an isolated case. Standards for XML, XSL, JavaScript, VBScript, are analogous. Therefore, you might imagine the problems when you begin to combine these languages and formats in an HTML document. Third, there is always more than one way to skin a cat. For example, Bobby and Cynthia may flag those IMG tags that do not contain a TITLE attribute. There might be good reason that a web developer chooses not to include the title attribute. The title attribute has a limited numbers of characters and does not support carriage returns. This is a major defect in the design of this tag. In fact, before the TITLE attribute was supported, there was the ALT attribute. Most browsers support both, yet they both perform a similar function. However, both attributes share the same deficiencies. In practice, there are instances where neither attribute would be used. Instead, for example, the developer would write some JavaScript or VBScript to circumvent these deficiencies. The concern is that Bobby and Cynthia would not notice this because it does not 'understand' what the JavaScript does.</Description>
              <Keywords>unused</Keywords>
              <Title>unused</Title>
            </Seo>
          </row>
        </rows>
      </root>

    EDIT: Here are my entities:

      public class LitePropertyData
      {
          public virtual string Description { get; set; }
          public virtual bool DisplayEditUI { get; set; }
          public int OwnerTab { get; set; }
          public virtual string DisplayName { get; set; }
          public int FieldOrder { get; set; }
          public bool IsRequired { get; set; }
          public string Name { get; set; }
          public virtual bool IsModified { get; set; }
          public virtual int ParentPageID { get; set; }
          public LiteDataType Type { get; set; }
          public object Value { get; set; }
      }

      [Serializable]
      public class LitePageData
      {
          public String Guid { get; set; }
          public Int32 ID { get; set; }
          public String Name { get; set; }
          public Int32? ParentID { get; set; }
          public DateTime Created { get; set; }
          public String CreatedBy { get; set; }
          public DateTime Changed { get; set; }
          public String ChangedBy { get; set; }
          public Int32? LitePageTypeID { get; set; }
          public String Html { get; set; }
          public String FriendlyName { get; set; }
          public Boolean IsDeleted { get; set; }
          public List<LitePropertyData> Properties { get; set; }
          public LiteSeoPageData Seo { get; set; }

          /// <summary>
          /// Saves the specified XML full file path.
          /// </summary>
          /// <param name="xmlFullFilePath">The XML full file path.</param>
          public void Save(String xmlFullFilePath)
          {
              XDocument doc = XDocument.Load(xmlFullFilePath);
              XElement demoNode = this.ToXElement<LitePageData>();
              demoNode.Name = "row";
              doc.Descendants("rows").Single().Add(demoNode);
              doc.Save(xmlFullFilePath);
          }
      }
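
    The error itself is the classic XmlSerializer root-name mismatch: a serializer for List<LitePropertyData> expects a document element named ArrayOfLitePropertyData, but the code hands it a single <LitePropertyData> element. One way out, sketched here rather than taken from an accepted fix, is to deserialize each property element individually with an explicit root name:

      public static T FromXElement<T>(this XElement element, string rootName)
      {
          // tell the serializer what the document element is actually called
          var serializer = new XmlSerializer(typeof(T), new XmlRootAttribute(rootName));
          using (var reader = element.CreateReader())
          {
              return (T)serializer.Deserialize(reader);
          }
      }

      // in GetPages:
      // Properties = record.Element("Properties") != null
      //     ? record.Element("Properties").Elements("LitePropertyData")
      //           .Select(e => e.FromXElement<LitePropertyData>("LitePropertyData"))
      //           .ToList()
      //     : new List<LitePropertyData>()

    (Note that the XmlSerializer(Type, XmlRootAttribute) overload does not cache its generated assembly, so in real code the serializer instance should be reused rather than rebuilt per element.)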

  • Why "menus" unit is finalized too early?

    - by Harriv
    I tested my application with FastMM and FullDebugMode turned on, since I had some shutdown problems. After solving bunch of my own problems FastMM started to complain about calling virtual method on a freed object in TPopupList. I tried to move the menus unit as early as possible in uses so that it would be finalized last, but it didn't help. Is this real problem, a bug in vcl or false alarm from FastMM? Here's the full report from FastMM: FastMM has detected an attempt to call a virtual method on a freed object. An access violation will now be raised in order to abort the current operation. Freed object class: TPopupList Virtual method: Offset +16 Virtual method address: 4714E4 The allocation number was: 220 The object was allocated by thread 0x1CC0, and the stack trace (return addresses) at the time was: 403216 [sys\system.pas][System][System.@GetMem][2654] 404A4F [sys\system.pas][System][System.TObject.NewInstance][8807] 404E16 [sys\system.pas][System][System.@ClassCreate][9472] 404A84 [sys\system.pas][System][System.TObject.Create][8822] 7F2602 [Menus.pas][Menus][Menus.Menus][4223] 40570F [sys\system.pas][System][System.InitUnits][11397] 405777 [sys\system.pas][System][System.@StartExe][11462] 40844F [SysInit.pas][SysInit][SysInit.@InitExe][663] 7F6368 [PCCSServer.dpr][PCCSServer][PCCSServer.PCCSServer][148] 7C90DCBA [ZwSetInformationThread] 7C817077 [Unknown function at RegisterWaitForInputIdle] The object was subsequently freed by thread 0x1CC0, and the stack trace (return addresses) at the time was: 403232 [sys\system.pas][System][System.@FreeMem][2699] 404A6D [sys\system.pas][System][System.TObject.FreeInstance][8813] 404E61 [sys\system.pas][System][System.@ClassDestroy][9513] 428D15 [common\Classes.pas][Classes][Classes.TList.Destroy][2914] 404AB3 [sys\system.pas][System][System.TObject.Free][8832] 472091 [Menus.pas][Menus][Menus.Finalization][4228] 4056A7 [sys\system.pas][System][System.FinalizeUnits][11256] 4056BF [sys\system.pas][System][System.FinalizeUnits][11261] 7C9032A8 [RtlConvertUlongToLargeInteger] 7C90327A [RtlConvertUlongToLargeInteger] 7C92AA0F [Unknown function at towlower] The current thread ID is 0x1CC0, and the stack trace (return addresses) leading to this error is: 4714B8 [Menus.pas][Menus][Menus.TPopupList.MainWndProc][3779] 435BB2 [common\Classes.pas][Classes][Classes.StdWndProc][11583] 7E418734 [Unknown function at GetDC] 7E418816 [Unknown function at GetDC] 7E428EA0 [Unknown function at DefWindowProcW] 7E428EEC [Unknown function at DefWindowProcW] 7C90E473 [KiUserCallbackDispatcher] 7E42B1A8 [DestroyWindow] 47CE31 [Controls.pas][Controls][Controls.TWinControl.DestroyWindowHandle][6857] 493BE4 [Forms.pas][Forms][Forms.TCustomForm.DestroyWindowHandle][4564] 4906D9 [Forms.pas][Forms][Forms.TCustomForm.Destroy][2929] Current memory dump of 256 bytes starting at pointer address 7FF9CFF0: 2C FE 82 00 80 80 80 80 80 80 80 80 80 80 80 80 80 80 80 80 C4 A3 2D 0C 00 00 00 00 B1 D0 F9 7F 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 C0 00 00 00 16 32 40 00 9D 5B 40 00 C8 5B 40 00 CE 82 40 00 3C 40 91 7C B0 B1 94 7C 0A 77 92 7C 84 77 92 7C 7C F0 96 7C 94 B3 94 7C 84 77 92 7C C0 1C 00 00 32 32 40 00 12 5B 40 00 EF 69 40 00 BA 20 47 00 A7 56 40 00 BF 56 40 00 A8 32 90 7C 7A 32 90 7C 0F AA 92 7C 0A 77 92 7C 84 77 92 7C C0 1C 00 00 0E 00 00 00 00 00 00 00 C7 35 65 59 2C FE 82 00 80 80 80 80 80 80 80 80 80 80 38 CA 9A A6 80 80 80 80 80 80 00 00 00 00 51 D1 F9 7F 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 C1 00 00 00 16 32 40 00 9D 5B 40 00 C8 5B 40 00 CE 82 40 00 
3C 40 91 7C B0 B1 94 7C 0A 77 92 7C 84 77 92 7C 7C F0 96 7C 94 B3 94 7C 84 77 92 7C , þ ‚ . € € € € € € € € € € € € € € € € Ä £ - . . . . . ± Ð ù . . . . . . . . . . . . . . . . À . . . . 2 @ . [ @ . È [ @ . Î ‚ @ . < @ ‘ | ° ± ” | . w ’ | „ w ’ | | ð – | ” ³ ” | „ w ’ | À . . . 2 2 @ . . [ @ . ï i @ . º G . § V @ . ¿ V @ . ¨ 2 | z 2 | . ª ’ | . w ’ | „ w ’ | À . . . . . . . . . . . Ç 5 e Y , þ ‚ . € € € € € € € € € € 8 Ê š ¦ € € € € € € . . . . Q Ñ ù . . . . . . . . . . . . . . . . Á . . . . 2 @ . [ @ . È [ @ . Î ‚ @ . < @ ‘ | ° ± ” | . w ’ | „ w ’ | | ð – | ” ³ ” | „ w ’ | I'm using Delphi 2007 and FastMM 4.97.

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text, then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment and it's not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

    Enter Response.Filter

    However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

    A common example: dynamic GZip encoding

    A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect the GZip capability of the client and compress response output with a single line of code, WebUtils.GZipEncodePage();, which is handled with a few lines of reusable code and a couple of static helper methods:

      /// <summary>
      /// Sets up the current page or handler to use GZip through a Response.Filter.
      /// IMPORTANT: You have to call this method before any output is generated!
      /// </summary>
      public static void GZipEncodePage()
      {
          HttpResponse Response = HttpContext.Current.Response;
          if (IsGZipSupported())
          {
              string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
              if (AcceptEncoding.Contains("deflate"))
              {
                  Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                        System.IO.Compression.CompressionMode.Compress);
                  Response.AppendHeader("Content-Encoding", "deflate");
              }
              else
              {
                  Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                        System.IO.Compression.CompressionMode.Compress);
                  Response.AppendHeader("Content-Encoding", "gzip");
              }
          }
          // Allow proxy servers to cache encoded and unencoded versions separately
          Response.AppendHeader("Vary", "Content-Encoding");
      }

      /// <summary>
      /// Determines if GZip is supported
      /// </summary>
      public static bool IsGZipSupported()
      {
          string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
          if (!string.IsNullOrEmpty(AcceptEncoding) &&
              (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
              return true;
          return false;
      }

    GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression to the active Response.

    Response.Filter content is chunked

    So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it's complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations.

    Creating a semi-generic Response Filter Stream Class

    What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
    So you can write plain old HTML like this: <a href="~/default.aspx">Home</a> and have it turned into: <a href="/myVirtual/default.aspx">Home</a> without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use: <img src="<%= ResolveUrl("~/images/home.gif") %>" /> in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can't take credit for this idea: while discussing the Response.Filter issues on Twitter, a hint came from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there's definitely some overhead to using a Response.Filter in general, and especially one that caches the output and then re-writes it later. Make sure to test for performance any time you use a Response.Filter hookup and make sure it doesn't end up killing perf on you. You've been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it's a reusable component – so for implementation there's no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't pass on the Write immediately to the filter's internal stream. The data is cached, and only when the Flush() method is called to finalize the Stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here's the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
using System;
using System.IO;
using System.Text;
using System.Web;

/// <summary>
/// A semi-generic Stream implementation for Response.Filter with
/// an event interface for handling Content transformations via
/// Stream or String.
/// <remarks>
/// Use with care for large output as this implementation copies
/// the output into a memory stream and so increases memory usage.
/// </remarks>
/// </summary>
public class ResponseFilterStream : Stream
{
    /// <summary>
    /// The original stream
    /// </summary>
    Stream _stream;

    /// <summary>
    /// Current position in the original stream
    /// </summary>
    long _position;

    /// <summary>
    /// Stream that original content is read into
    /// and then passed to the TransformStream function
    /// </summary>
    MemoryStream _cacheStream = new MemoryStream(5000);

    /// <summary>
    /// Internal pointer that keeps track of the size
    /// of the cacheStream
    /// </summary>
    int _cachePointer = 0;

    public ResponseFilterStream(Stream responseStream)
    {
        _stream = responseStream;
    }

    /// <summary>
    /// Determines whether the stream is captured
    /// </summary>
    private bool IsCaptured
    {
        get
        {
            if (CaptureStream != null || CaptureString != null ||
                TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Determines whether the Write method is outputting data immediately
    /// or delaying output until Flush() is fired.
    /// </summary>
    private bool IsOutputDelayed
    {
        get
        {
            if (TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a MemoryStream instance. Output is captured but won't
    /// affect Response output.
    /// </summary>
    public event Action<MemoryStream> CaptureStream;

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a string. Output is captured but won't affect Response output.
    /// </summary>
    public event Action<string> CaptureString;

    /// <summary>
    /// Event that allows you to transform the stream as each chunk of
    /// the output is written in the Write() operation of the stream.
    /// This means that it's possible/likely that the input
    /// buffer will not contain the full response output but only
    /// one of potentially many chunks.
    ///
    /// This event is called as part of the filter stream's Write()
    /// operation.
    /// </summary>
    public event Func<byte[], byte[]> TransformWrite;

    /// <summary>
    /// Event that allows you to transform the response stream as
    /// each chunk of byte[] output is written during the stream's write
    /// operation. This means it's possible/likely that the string
    /// passed to the handler only contains a portion of the full
    /// output. Typical buffer chunks are around 16k a piece.
    ///
    /// This event is called as part of the stream's Write operation.
    /// </summary>
    public event Func<string, string> TransformWriteString;

    /// <summary>
    /// This event allows capturing and transformation of the entire
    /// output stream by caching all write operations and delaying final
    /// response output until Flush() is called on the stream.
    /// </summary>
    public event Func<MemoryStream, MemoryStream> TransformStream;

    /// <summary>
    /// Event that can be hooked up to handle Response.Filter
    /// Transformation. Passed a string that you can modify and
    /// return back as a return value. The modified content
    /// will become the final output.
    /// </summary>
    public event Func<string, string> TransformString;

    protected virtual void OnCaptureStream(MemoryStream ms)
    {
        if (CaptureStream != null)
            CaptureStream(ms);
    }

    private void OnCaptureStringInternal(MemoryStream ms)
    {
        if (CaptureString != null)
        {
            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            OnCaptureString(content);
        }
    }

    protected virtual void OnCaptureString(string output)
    {
        if (CaptureString != null)
            CaptureString(output);
    }

    protected virtual byte[] OnTransformWrite(byte[] buffer)
    {
        if (TransformWrite != null)
            return TransformWrite(buffer);
        return buffer;
    }

    private byte[] OnTransformWriteStringInternal(byte[] buffer)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = OnTransformWriteString(encoding.GetString(buffer));
        return encoding.GetBytes(output);
    }

    private string OnTransformWriteString(string value)
    {
        if (TransformWriteString != null)
            return TransformWriteString(value);
        return value;
    }

    protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
    {
        if (TransformStream != null)
            return TransformStream(ms);
        return ms;
    }

    /// <summary>
    /// Allows transforming of strings.
    ///
    /// Note this handler is private and not meant to be overridden:
    /// the TransformString event has to be hooked up in order
    /// for this handler to even fire, to avoid the overhead of string
    /// conversion on every pass through.
    /// </summary>
    private string OnTransformCompleteString(string responseText)
    {
        if (TransformString != null)
            responseText = TransformString(responseText);  // assign the result - the original dropped it
        return responseText;
    }

    /// <summary>
    /// Wrapper method for OnTransformCompleteString that handles
    /// stream to string and vice versa conversions
    /// </summary>
    internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
    {
        if (TransformString == null)
            return ms;

        string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
        content = TransformString(content);
        byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);

        ms = new MemoryStream();
        ms.Write(buffer, 0, buffer.Length);

        return ms;
    }

    public override bool CanRead
    {
        get { return true; }
    }

    public override bool CanSeek
    {
        get { return true; }
    }

    public override bool CanWrite
    {
        get { return true; }
    }

    public override long Length
    {
        get { return 0; }
    }

    public override long Position
    {
        get { return _position; }
        set { _position = value; }
    }

    public override long Seek(long offset, System.IO.SeekOrigin direction)
    {
        return _stream.Seek(offset, direction);
    }

    public override void SetLength(long length)
    {
        _stream.SetLength(length);
    }

    public override void Close()
    {
        _stream.Close();
    }

    /// <summary>
    /// Override flush by writing out the cached stream data
    /// </summary>
    public override void Flush()
    {
        if (IsCaptured && _cacheStream.Length > 0)
        {
            // Check for transform implementations
            _cacheStream = OnTransformCompleteStream(_cacheStream);
            _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

            OnCaptureStream(_cacheStream);
            OnCaptureStringInternal(_cacheStream);

            // write the stream back out if output was delayed
            if (IsOutputDelayed)
                _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

            // Clear the cache once we've written it out
            _cacheStream.SetLength(0);
        }

        // default flush behavior
        _stream.Flush();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return _stream.Read(buffer, offset, count);
    }

    /// <summary>
    /// Overridden to capture output written by ASP.NET into a cached
    /// stream that is written out later when Flush() is called.
    /// </summary>
    public override void Write(byte[] buffer, int offset, int count)
    {
        if (IsCaptured)
        {
            // copy to holding buffer only - we'll write out later
            // (honor the caller's offset; the original started at 0)
            _cacheStream.Write(buffer, offset, count);
            _cachePointer += count;
        }

        // just transform this buffer if chunk-level handlers are hooked up
        if (TransformWrite != null)
            buffer = OnTransformWrite(buffer);
        if (TransformWriteString != null)
            buffer = OnTransformWriteStringInternal(buffer);

        if (!IsOutputDelayed)
        {
            // if a chunk transform replaced the buffer, the original
            // offset/count no longer apply - write the whole new buffer
            if (TransformWrite != null || TransformWriteString != null)
                _stream.Write(buffer, 0, buffer.Length);
            else
                _stream.Write(buffer, offset, count);
        }
    }
}

The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. The rest of the members tend to be plain-jane pass-through stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

A few things to consider

Performance

Response.Filter is not great for performance in general, as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit because it duplicates the response output into the cached memory stream – which is necessary since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful about wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

Response.Filter works everywhere

A few questions came up in comments and discussion about capturing ALL output hitting the site – and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful about which content you actually capture, especially in IIS 7, which passes ALL content – including static images, CSS, etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for, like the page extension or, maybe more effectively, the Response.ContentType.
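To make the module hookup concrete, here's a minimal sketch of attaching the filter from an HttpModule to do the "~/" link expansion discussed at the start of this post. The module name, the .aspx extension check, and the naive string replacement are illustrative choices of mine, not part of the original sample; a production version would at least use a regular expression instead of blind Replace() calls:

using System;
using System.Web;

public class VirtualPathRewriteModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext context = ((HttpApplication)sender).Context;

            // only filter pages - IIS 7 pushes static files through the pipeline too
            if (!context.Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
                return;

            // "/" for a root site, "/myVirtual/" for a virtual directory
            string appRoot = context.Request.ApplicationPath.TrimEnd('/') + "/";

            ResponseFilterStream filter = new ResponseFilterStream(context.Response.Filter);
            filter.TransformString += html =>
                html.Replace("href=\"~/", "href=\"" + appRoot)
                    .Replace("src=\"~/", "src=\"" + appRoot);
            context.Response.Filter = filter;
        };
    }

    public void Dispose()
    {
    }
}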
Response.Filter Chaining

Originally I thought that filter chaining didn't work at all due to a bug in the stream implementation code, but it is in fact quite possible to assign multiple filters to the Response.Filter property. So the following actually works, both compressing the output and applying the transformed content:

WebUtils.GZipEncodePage();

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

However, the following does not work, resulting in invalid content encoding errors:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

WebUtils.GZipEncodePage();

In other words, multiple Response filters can work together, but it depends entirely on the implementations whether, and in which order, they can be chained. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified afterwards. But if you attach the compression first, it works fine – as unintuitive as that may seem.

Resources

Download example code
Capture Output from ASP.NET Pages

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

    Read the article

  • A first look at ConfORM - Part 1

    - by thangchung
All source code for this post can be found here. Have you ever heard of ConfORM? I read about it three months ago when I wrote a post about NHibernate and Autofac. At that time the project had just started and was still in beta, so I did not pay much attention to it. But recently, while reading Jason Dentler's book NHibernate 3.0 Cookbook, I started to take notice. The author mentions quite a lot of OSS in his book, and so I reviewed ConfORM once again. I have joined the ConfORM development group on Google and read some articles about it. Fabio Maulo has put a lot of work into this OSS, and I hope it will become a great fit for NHibernate (given that he contributes to NHibernate as well).

So what is ConfORM? It stands for Configuration ORM, and it uses a lot of heuristics to identify entities from C# code. Today development is mostly model-first driven, so the first thing to do is build the entity model. This is really important; we can see it as the heart of the business software. Then we have to tell the DB about the entities of this model. We use forward engineering here: the database schema is created from the entity model. From then on we are not interested in the DB at all; we only focus on the entity model.

Fluent NHibernate is really good; I like that OSS. Sharp Architecture has done a great job of integrating Fluent NHibernate into applications. The multiple-database technique in Sharp Architecture is truly awesome: it takes a configuration, a connection string and a DLL containing the entity model, creates a SessionFactory from them, and finally caches it in memory. Since the number of SessionFactories can grow very large and fill up memory, it has also devised a way of caching the SessionFactory in a file. This post will not completely explain how to build a multiple-database model; I have just tried to combine a number of posts from the community with some of my own knowledge to build a session management model for ConfORM.

Like Fluent NHibernate, ConfORM also supports mapping against interfaces; see this to understand it. So the first thing is to build the entity model, and here is the model I will use for this article: a simple model for managing news and polls. It may be too easy for some people, but I hope it keeps unnecessary complexity out of this post. I will then need some code to build a supertype for the entity model.
public interface IEntity<TId>
{
    TId Id { get; set; }
}

public abstract class EntityBase<TId> : IEntity<TId>
{
    public virtual TId Id { get; set; }

    public override bool Equals(object obj)
    {
        return Equals(obj as EntityBase<TId>);
    }

    private static bool IsTransient(EntityBase<TId> obj)
    {
        // an entity is transient while its Id still has the default value
        return obj != null &&
               Equals(obj.Id, default(TId));
    }

    private Type GetUnproxiedType()
    {
        return GetType();
    }

    public virtual bool Equals(EntityBase<TId> other)
    {
        if (other == null)
            return false;
        if (ReferenceEquals(this, other))
            return true;
        if (!IsTransient(this) &&
            !IsTransient(other) &&
            Equals(Id, other.Id))
        {
            var otherType = other.GetUnproxiedType();
            var thisType = GetUnproxiedType();
            return thisType.IsAssignableFrom(otherType) ||
                   otherType.IsAssignableFrom(thisType);
        }
        return false;
    }

    public override int GetHashCode()
    {
        if (Equals(Id, default(TId)))
            return base.GetHashCode();
        return Id.GetHashCode();
    }
}

The database schema will be created from this model (the schema diagram, with tables for Category, News, Poll and User, is in the original post).

The next step is to build the ConfORM builder that creates an NHibernate Configuration. Patrick has an excellent article about it here. Its contract is below:

public interface IConfigBuilder
{
    Configuration BuildConfiguration(string connectionString, string sessionFactoryName);
}

The idea here is that I pass in a connection string and a set of DLLs containing the entity model, and it gives me back an NHibernate Configuration (shame on me, I stole this idea from Sharp Architecture).
And here is its code:

public abstract class ConfORMConfigBuilder : RootObject, IConfigBuilder
{
    private static IConfigurator _configurator;

    protected IEnumerable<Type> DomainTypes;

    private readonly IEnumerable<string> _assemblies;

    protected ConfORMConfigBuilder(IEnumerable<string> assemblies)
        : this(new Configurator(), assemblies)
    {
    }

    protected ConfORMConfigBuilder(IConfigurator configurator, IEnumerable<string> assemblies)
    {
        _configurator = configurator;
        _assemblies = assemblies;
    }

    public abstract void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString);

    protected abstract HbmMapping GetMapping();

    public Configuration BuildConfiguration(string connectionString, string sessionFactoryName)
    {
        Contract.Requires(!string.IsNullOrEmpty(connectionString), "ConnectionString is null or empty");
        Contract.Requires(!string.IsNullOrEmpty(sessionFactoryName), "SessionFactory name is null or empty");
        Contract.Requires(_configurator != null, "Configurator is null");

        return CatchExceptionHelper.TryCatchFunction(
            () =>
            {
                DomainTypes = GetTypeOfEntities(_assemblies);

                if (DomainTypes == null)
                    throw new Exception("Type of domains is null");

                var configure = new Configuration();
                configure.SessionFactoryName(sessionFactoryName);

                configure.Proxy(p => p.ProxyFactoryFactory<ProxyFactoryFactory>());
                configure.DataBaseIntegration(db => GetDatabaseIntegration(db, connectionString));

                if (_configurator.GetAppSettingString("IsCreateNewDatabase").ConvertToBoolean())
                {
                    configure.SetProperty("hbm2ddl.auto", "create-drop");
                }

                configure.Properties.Add("default_schema", _configurator.GetAppSettingString("DefaultSchema"));
                configure.AddDeserializedMapping(GetMapping(),
                                                 _configurator.GetAppSettingString("DocumentFileName"));

                SchemaMetadataUpdater.QuoteTableAndColumns(configure);

                return configure;
            }, Logger);
    }

    protected IEnumerable<Type> GetTypeOfEntities(IEnumerable<string> assemblies)
    {
        var type = typeof(EntityBase<Guid>);
        var domainTypes = new List<Type>();

        foreach (var assembly in assemblies)
        {
            var realAssembly = Assembly.LoadFrom(assembly);

            if (realAssembly == null)
                throw new NullReferenceException();

            domainTypes.AddRange(realAssembly.GetTypes().Where(
                t =>
                {
                    if (t.BaseType != null)
                        return string.Compare(t.BaseType.FullName, type.FullName) == 0;
                    return false;
                }));
        }

        return domainTypes;
    }
}

I do not want to depend on any particular RDBMS, so I made the builder an abstract class, and I create a concrete instance for SQL Server 2008 as follows:

public class SqlServerConfORMConfigBuilder : ConfORMConfigBuilder
{
    public SqlServerConfORMConfigBuilder(IEnumerable<string> assemblies)
        : base(assemblies)
    {
    }

    public override void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString)
    {
        dBIntegration.Dialect<MsSql2008Dialect>();
        dBIntegration.Driver<SqlClientDriver>();
        dBIntegration.KeywordsAutoImport = Hbm2DDLKeyWords.AutoQuote;
        dBIntegration.IsolationLevel = IsolationLevel.ReadCommitted;
        dBIntegration.ConnectionString = connectionString;
        dBIntegration.LogSqlInConsole = true;
        dBIntegration.Timeout = 10;
        dBIntegration.LogFormatedSql = true;
        dBIntegration.HqlToSqlSubstitutions = "true 1, false 0, yes 'Y', no 'N'";
    }

    protected override HbmMapping GetMapping()
    {
        var orm = new ObjectRelationalMapper();

        orm.Patterns.PoidStrategies.Add(new GuidPoidPattern());

        var patternsAppliers = new CoolPatternsAppliersHolder(orm);
        //patternsAppliers.Merge(new DatePropertyByNameApplier()).Merge(new MsSQL2008DateTimeApplier());
        patternsAppliers.Merge(new ManyToOneColumnNamingApplier());
        patternsAppliers.Merge(new OneToManyKeyColumnNamingApplier(orm));

        var mapper = new Mapper(orm, patternsAppliers);

        var entities = new List<Type>();

        DomainDefinition(orm);
        Customize(mapper);

        entities.AddRange(DomainTypes);

        return mapper.CompileMappingFor(entities);
    }

    private void DomainDefinition(IObjectRelationalMapper orm)
    {
        orm.TablePerClassHierarchy(new[] { typeof(EntityBase<Guid>) });
        orm.TablePerClass(DomainTypes);

        orm.OneToOne<News, Poll>();
        orm.ManyToOne<Category, News>();

        orm.Cascade<Category, News>(Cascade.All);
        orm.Cascade<News, Poll>(Cascade.All);
        orm.Cascade<User, Poll>(Cascade.All);
    }

    private static void Customize(Mapper mapper)
    {
        CustomizeRelations(mapper);
        CustomizeTables(mapper);
        CustomizeColumns(mapper);
    }

    private static void CustomizeRelations(Mapper mapper)
    {
    }

    private static void CustomizeTables(Mapper mapper)
    {
    }

    private static void CustomizeColumns(Mapper mapper)
    {
        mapper.Class<Category>(
            cm =>
            {
                cm.Property(x => x.Name, m => m.NotNullable(true));
                cm.Property(x => x.CreatedDate, m => m.NotNullable(true));
            });

        mapper.Class<News>(
            cm =>
            {
                cm.Property(x => x.Title, m => m.NotNullable(true));
                cm.Property(x => x.ShortDescription, m => m.NotNullable(true));
                cm.Property(x => x.Content, m => m.NotNullable(true));
            });

        mapper.Class<Poll>(
            cm =>
            {
                cm.Property(x => x.Value, m => m.NotNullable(true));
                cm.Property(x => x.VoteDate, m => m.NotNullable(true));
                cm.Property(x => x.WhoVote, m => m.NotNullable(true));
            });

        mapper.Class<User>(
            cm =>
            {
                cm.Property(x => x.UserName, m => m.NotNullable(true));
                cm.Property(x => x.Password, m => m.NotNullable(true));
            });
    }
}
As you can see, we can do many things in this class, such as customizing entity relations, column bindings, table names, and so on. Here I have only written two pattern appliers, for the ManyToOne and OneToMany relationships; you can refer to them here:

public class ManyToOneColumnNamingApplier : IPatternApplier<PropertyPath, IManyToOneMapper>
{
    #region IPatternApplier<PropertyPath,IManyToOneMapper> Members

    public void Apply(PropertyPath subject, IManyToOneMapper applyTo)
    {
        applyTo.Column(subject.ToColumnName() + "Id");
    }

    #endregion

    #region IPattern<PropertyPath> Members

    public bool Match(PropertyPath subject)
    {
        return subject != null;
    }

    #endregion
}

public class OneToManyKeyColumnNamingApplier : OneToManyPattern, IPatternApplier<PropertyPath, ICollectionPropertiesMapper>
{
    public OneToManyKeyColumnNamingApplier(IDomainInspector domainInspector) : base(domainInspector) { }

    #region Implementation of IPattern<PropertyPath>

    public bool Match(PropertyPath subject)
    {
        return Match(subject.LocalMember);
    }

    #endregion Implementation of IPattern<PropertyPath>

    #region Implementation of IPatternApplier<PropertyPath,ICollectionPropertiesMapper>

    public void Apply(PropertyPath subject, ICollectionPropertiesMapper applyTo)
    {
        applyTo.Key(km => km.Column(GetKeyColumnName(subject)));
    }

    #endregion Implementation of IPatternApplier<PropertyPath,ICollectionPropertiesMapper>

    protected virtual string GetKeyColumnName(PropertyPath subject)
    {
        Type propertyType = subject.LocalMember.GetPropertyOrFieldType();
        Type childType = propertyType.DetermineCollectionElementType();
        var entity = subject.GetContainerEntity(DomainInspector);
        var parentPropertyInChild = childType.GetFirstPropertyOfType(entity);
        var baseName = parentPropertyInChild == null
            ? subject.PreviousPath == null ? entity.Name : entity.Name + subject.PreviousPath
            : parentPropertyInChild.Name;
        return GetKeyColumnName(baseName);
    }

    protected virtual string GetKeyColumnName(string baseName)
    {
        return string.Format("{0}Id", baseName);
    }
}

Anyone can download the ConfORM source at Google Code and look at the examples inside it. In the next part I will write about the multiple-database factory. I hope you enjoy it. Happy coding, and see you in the next part.
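Before moving on, here is a rough sketch of how the builder might be consumed. The connection string and the domain assembly path are placeholders of mine, and the Category property values come from the mappings above:

// build the NHibernate Configuration from the entity model assembly
IConfigBuilder configBuilder = new SqlServerConfORMConfigBuilder(
    new[] { @"bin\MyCompany.NewsDomain.dll" });

Configuration configuration = configBuilder.BuildConfiguration(
    @"Data Source=.\SQLEXPRESS;Initial Catalog=NewsDb;Integrated Security=True",
    "NewsSessionFactory");

// the SessionFactory is expensive - build it once and cache it
ISessionFactory sessionFactory = configuration.BuildSessionFactory();

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    session.Save(new Category { Name = "General", CreatedDate = DateTime.Now });
    tx.Commit();
}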

    Read the article

  • Postfix Submission port issue

    - by RevSpot
I have set up Postfix + Mailman on my Debian server, and I have an issue with the Postfix submission port. My ISP blocks SMTP on port 25 to prevent spam, so I must use the submission port (587). I have uncommented the following line in master.cf (/etc/postfix/), but nothing happens:

submission inet n - - - - smtpd

This is my mail log when I try to invite a user to a Mailman list:

Nov 6 00:35:34 myhostname postfix/qmgr[1763]: C90BF1060D: from=<[email protected]>, size=1743, nrcpt=1 (queue active)
Nov 6 00:35:34 myhostname postfix/qmgr[1763]: DF54B10608: from=<[email protected]>, size=488, nrcpt=1 (queue active)
Nov 6 00:35:34 myhostname postfix/qmgr[1763]: 80F0D10609: from=<[email protected]>, size=483, nrcpt=1 (queue active)
Nov 6 00:35:55 myhostname postfix/smtp[2269]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
Nov 6 00:35:55 myhostname postfix/smtp[2270]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
Nov 6 00:35:55 myhostname postfix/smtp[2271]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
Nov 6 00:36:16 myhostname postfix/smtp[2269]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
Nov 6 00:36:16 myhostname postfix/smtp[2270]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
Nov 6 00:36:16 myhostname postfix/smtp[2271]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
Nov 6 00:36:37 myhostname postfix/smtp[2269]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
Nov 6 00:36:37 myhostname postfix/smtp[2270]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
Nov 6 00:36:37 myhostname postfix/smtp[2271]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
Nov 6 00:36:58 myhostname postfix/smtp[2269]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
Nov 6 00:36:58 myhostname postfix/smtp[2270]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
Nov 6 00:36:58 myhostname postfix/smtp[2271]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
Nov 6 00:37:19 myhostname postfix/smtp[2269]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
Nov 6 00:37:19 myhostname postfix/smtp[2270]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
Nov 6 00:37:19 myhostname postfix/smtp[2269]: C90BF1060D: to=<[email protected]>, relay=none, delay=23711, delays=23606/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
Nov 6 00:37:19 myhostname postfix/smtp[2271]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
Nov 6 00:37:19 myhostname postfix/smtp[2270]: DF54B10608: to=<[email protected]>, relay=none, delay=23882, delays=23777/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
Nov 6 00:37:19 myhostname postfix/smtp[2271]: 80F0D10609: to=<[email protected]>, relay=none, delay=23875, delays=23770/0.04/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)

main.cf

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
myhostname = mail.mydomain.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = mail.mydomain.com, localhost.mydomain.com, localhost
relayhost =
relay_domains = $mydestination, mail.mydomain.com
relay_recipient_maps = hash:/var/lib/mailman/data/virtual-mailman
transport_maps = hash:/etc/postfix/transport
mailman_destination_recipient_limit = 1
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
local_recipient_maps =

master.cf

smtp       inet  n  -  -  -    -  smtpd
submission inet  n  -  -  -    -  smtpd
#  -o smtpd_tls_security_level=encrypt
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#smtps     inet  n  -  -  -    -  smtpd
#  -o smtpd_tls_wrappermode=yes
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#628       inet  n  -  -  -    -  qmqpd
pickup     fifo  n  -  -  60   1  pickup
cleanup    unix  n  -  -  -    0  cleanup
qmgr       fifo  n  -  n  300  1  qmgr
#qmgr      fifo  n  -  -  300  1  oqmgr
tlsmgr     unix  -  -  -  1000? 1 tlsmgr
rewrite    unix  -  -  -  -    -  trivial-rewrite
bounce     unix  -  -  -  -    0  bounce
defer      unix  -  -  -  -    0  bounce
trace      unix  -  -  -  -    0  bounce
verify     unix  -  -  -  -    1  verify
flush      unix  n  -  -  1000? 0 flush
proxymap   unix  -  -  n  -    -  proxymap
proxywrite unix  -  -  n  -    1  proxymap
smtp       unix  -  -  -  -    -  smtp
# When relaying mail as backup MX, disable fallback_relay to avoid MX loops
relay      unix  -  -  -  -    -  smtp
        -o smtp_fallback_relay=
#       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq      unix  n  -  -  -    -  showq
error      unix  -  -  -  -    -  error
retry      unix  -  -  -  -    -  error
discard    unix  -  -  -  -    -  discard
local      unix  -  n  n  -    -  local
virtual    unix  -  n  n  -    -  virtual
lmtp       unix  -  -  -  -    -  lmtp
anvil      unix  -  -  -  -    1  anvil
scache     unix  -  -  -  -    1  scache
#
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop   unix  -  n  n  -    -  pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp       unix  -  n  n  -    -  pipe
  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail     unix  -  n  n  -    -  pipe
  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp      unix  -  n  n  -    -  pipe
  flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n -  2  pipe
  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman    unix  -  n  n  -    -  pipe
  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
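For reference, my understanding is that the submission entry above only makes Postfix listen on port 587 for incoming mail, while the timeouts in the log are all outgoing connections to remote port 25. So I suspect the outbound side needs to relay through my ISP's mail server on 587, with something like the following in main.cf (the relay hostname is a placeholder; the credentials would go in /etc/postfix/sasl_passwd):

# route all outbound mail through the ISP's submission port
relayhost = [smtp.isp.example]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = may

Is that the right direction, or am I missing something in master.cf?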

    Read the article
