Search Results

Search found 7586 results on 304 pages for 'header only'.


  • dig @my-server-ip mydomain.com works from inside, not from outside?

    - by x4954
    My server has two IPs: x.x.x.73 and x.x.x.248. I can access my site via these IPs using a web browser. Now, from a CentOS machine (not my server), if I run the following in a terminal:

        dig @x.x.x.73 mydomain.com
        dig @x.x.x.248 mydomain.com

    I get the result: "Connection timed out; no server could be reached." Could somebody please tell me how to fix this? Thank you.

    More information: if I log in to my server over SSH and run the same two dig commands, I can see my zone as expected:

        ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5_7.1 <<>> @x.x.x.73 mydomain.com
        ; (1 server found)
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12757
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

        ;; QUESTION SECTION:
        ;mydomain.com.                 IN      A

        ;; ANSWER SECTION:
        mydomain.com.          38400   IN      A       x.x.x.73
        mydomain.com.          38400   IN      A       x.x.x.248

        ;; AUTHORITY SECTION:
        mydomain.com.          38400   IN      NS      ns2.mydomain.com.
        mydomain.com.          38400   IN      NS      ns1.mydomain.com.

        ;; ADDITIONAL SECTION:
        ns1.mydomain.com.      38400   IN      A       x.x.x.73
        ns2.mydomain.com.      38400   IN      A       x.x.x.248

        ;; Query time: 20 msec
        ;; SERVER: x.x.x.73#53(x.x.x.73)
        ;; WHEN: Sun Jan 15 11:46:30 2012
        ;; MSG SIZE  rcvd: 129

    BIND version 9.3.6, CentOS 5. Logged in to the server over SSH, doing a "dig google.com" also shows the expected results.
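    A quick sanity check for this symptom — a sketch only, assuming a stock BIND/iptables setup on CentOS, so adjust paths and rules to your environment — is to confirm that named is bound to the public addresses and that port 53 is reachable from outside:

        # Is named listening on the public IPs (not just 127.0.0.1)?
        netstat -lnp | grep ':53'

        # Does named.conf restrict the listening interfaces or clients?
        grep -E 'listen-on|allow-query' /etc/named.conf

        # If the firewall is dropping DNS traffic, open port 53 (UDP and TCP):
        iptables -I INPUT -p udp --dport 53 -j ACCEPT
        iptables -I INPUT -p tcp --dport 53 -j ACCEPT
        service iptables save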


  • Ubuntu stopped recognizing my iPod

    - by flashnode
    Rhythmbox on Ubuntu 10.10 used to recognize my 3rd-gen Nano and transfer MP3s. Now, when I plug it in, Ubuntu no longer pops up the box asking what I want to do. The iPod is only recognized if it is already plugged in when I reboot. Here is the output of 'lsusb -v -s bus:device':

        Bus 001 Device 008: ID 05ac:1262 Apple, Inc. iPod Nano 3.Gen
        Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x05ac Apple, Inc. idProduct 0x1262 iPod Nano 3.Gen bcdDevice 0.01 iManufacturer 1 Apple Inc. iProduct 2 iPod iSerial 3 000A27001A670128 bNumConfigurations 2
        Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 500mA
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
        Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 149 bNumInterfaces 3 bConfigurationValue 2 iConfiguration 4 iPod USB Interface bmAttributes 0xc0 Self Powered MaxPower 500mA
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 1 Audio bInterfaceSubClass 1 Control Device bInterfaceProtocol 0 iInterface 0
        AudioControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 1 (HEADER) bcdADC 1.00 wTotalLength 30 bInCollection 1 baInterfaceNr( 0) 1
        AudioControl Interface Descriptor: bLength 12 bDescriptorType 36 bDescriptorSubtype 2 (INPUT_TERMINAL) bTerminalID 1 wTerminalType 0x0201 Microphone bAssocTerminal 2 bNrChannels 2 wChannelConfig 0x0003 Left Front (L) Right Front (R) iChannelNames 0 iTerminal 0
        AudioControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 3 (OUTPUT_TERMINAL) bTerminalID 2 wTerminalType 0x0101 USB Streaming bAssocTerminal 1 bSourceID 1 iTerminal 0
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 1 bNumEndpoints 1 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0
        AudioStreaming Interface Descriptor: bLength 7 bDescriptorType 36 bDescriptorSubtype 1 (AS_GENERAL) bTerminalLink 2 bDelay 1 frames wFormatTag 1 PCM
        AudioStreaming Interface Descriptor: bLength 35 bDescriptorType 36 bDescriptorSubtype 2 (FORMAT_TYPE) bFormatType 1 (FORMAT_TYPE_I) bNrChannels 2 bSubframeSize 2 bBitResolution 16 bSamFreqType 9 Discrete tSamFreq[ 0] 8000 tSamFreq[ 1] 11025 tSamFreq[ 2] 12000 tSamFreq[ 3] 16000 tSamFreq[ 4] 22050 tSamFreq[ 5] 24000 tSamFreq[ 6] 32000 tSamFreq[ 7] 44100 tSamFreq[ 8] 48000
        Endpoint Descriptor: bLength 9 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c0 1x 192 bytes bInterval 4 bRefresh 0 bSynchAddress 0
        AudioControl Endpoint Descriptor: bLength 7 bDescriptorType 37 bDescriptorSubtype 1 (EP_GENERAL) bmAttributes 0x01 Sampling Frequency bLockDelayUnits 0 Undefined wLockDelay 0 Undefined
        Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 2 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0
        HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.01 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 208
        Report Descriptors: ** UNAVAILABLE **
        Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1
        Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 2
        Device Status: 0x0000 (Bus Powered)

    An Ubuntu forum thread told me to check the automount settings under /apps/nautilus/preferences/media_automount_open in gconf-editor, and I did that. Any clues?
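    One way to see whether the device is reaching the desktop layer at all — a sketch assuming GNOME 2's gconf tools and udev, as shipped with Ubuntu 10.10 — is to query the automount keys directly and watch events while plugging the iPod in:

        # Confirm the automount keys Nautilus consults:
        gconftool-2 --get /apps/nautilus/preferences/media_automount
        gconftool-2 --get /apps/nautilus/preferences/media_automount_open

        # Watch udev events while plugging the iPod in; if nothing prints,
        # the problem is below the desktop layer:
        udevadm monitor --property --udev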


  • GMail detecting mail as spam

    - by Petru Toader
    I've been trying for a long time to get our company's mail server to send mail that will be accepted by the GMail spam filter. I have managed to make it work for Yahoo Mail and Hotmail; sadly, GMail is still marking our mails as spam. I have configured DKIM, SPF, and DMARC, and verified our mail server's IP address against blacklists. I have also pasted below the headers GMail receives when we send a mail.

        Delivered-To: [email protected]
        Received: by 10.42.215.6 with SMTP id hc6csp107427icb; Wed, 20 Aug 2014 07:34:26 -0700 (PDT)
        X-Received: by 10.194.100.34 with SMTP id ev2mr59101019wjb.76.1408545265402; Wed, 20 Aug 2014 07:34:25 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from mail.phyramid.com (mail.phyramid.com. [178.157.82.23]) by mx.google.com with ESMTPS id dj10si4827754wib.79.2014.08.20.07.34.24 for <[email protected]> (version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 20 Aug 2014 07:34:25 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates 178.157.82.23 as permitted sender) client-ip=178.157.82.23;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 178.157.82.23 as permitted sender) [email protected]; dkim=pass header[email protected]
        Received: from localhost (localhost [127.0.0.1]) by mail.phyramid.com (Postfix) with ESMTP id ED2BB2017AC for <[email protected]>; Wed, 20 Aug 2014 17:33:23 +0300 (EEST)
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=phyramid.com; h=content-type:content-type:mime-version:x-mailer:subject:subject:message-id:to:from:from:date:date; s=dkim; t=1408545197; x=1409409197; bh=e04RtoyF7G39lfCvA9LLhTz4nF64siZtN5IYmC18Xsc=; b=o+6mO8Uz4Uf1G4U2q6tKUiEy2N2n/5R2VtPPwIvBE5xzK/hEd2sDGMxVzQVgIDCsKQ0Xh+auPaQpxldQ+AEcL2XSZMrk/g0mJONjkpI19I5AwGIJCR1SVvxdecohTn9iRbCHzrGi2wAicfDBzOH6lUBNfh2thri79aubdCYc97U=
        X-Amavis-Modified: Mail body modified (using disclaimer) - mail.phyramid.com
        X-Virus-Scanned: Debian amavisd-new at mail.phyramid.com
        Received: from mail.phyramid.com ([127.0.0.1]) by localhost (mail.phyramid.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3JcgXZAXeFtX for <[email protected]>; Wed, 20 Aug 2014 17:33:17 +0300 (EEST)
        Received: from whiterock.local (unknown [109.98.21.30]) by mail.phyramid.com (Postfix) with ESMTPSA id 05CAE200280 for <[email protected]>; Wed, 20 Aug 2014 17:33:15 +0300 (EEST)
        Date: Wed, 20 Aug 2014 17:34:15 +0300
        From: Company Mail <[email protected]>
        To: [email protected]
        Message-ID: <[email protected]>
        Subject: hey there!
        X-Mailer: Airmail (247)
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: 7bit
        Content-Disposition: inline

        How was your summer?
        ----
        Thanks a lot!
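    Since the headers above show spf=pass and dkim=pass with selector s=dkim and d=phyramid.com, one quick external check — illustrative commands using the selector and domain taken from those headers — is to confirm the published records and the DMARC policy from outside:

        # SPF record
        dig +short TXT phyramid.com

        # DKIM public key for the selector used in the signature (s=dkim)
        dig +short TXT dkim._domainkey.phyramid.com

        # DMARC policy
        dig +short TXT _dmarc.phyramid.com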


  • View a PDF with quick web view through an Apache proxy

    - by Musa
    I have a site (IIS) that is accessed via a proxy in Apache (on an IBM i). The site serves PDFs that have quick web view; if I access a PDF directly from the IIS server it starts to display immediately, but if I go through the proxy I have to wait until the entire PDF downloads before I can view it. In the Apache config file I use:

        ProxyPass /path/ http://xxx.xxx.xxx.xxx/
        <LocationMatch "/path/">
            Header set Cache-Control "no-cache"
        </LocationMatch>

    I tried adding SetEnv proxy-sendcl to the LocationMatch directive; this had no effect. The PDFs that display quickly make a lot of partial requests. This is the initial request and its response headers:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip

        HTTP/1.1 200 OK
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 15330238
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    This is a partial request and its response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept: */*
        Referer: http://xxx.xxx.xxx.xxx/xxxx.PDF
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip
        Range: bytes=0-32767

        HTTP/1.1 206 Partial Content
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 32768
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Range: bytes 0-32767/15330238
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    These are the headers I get if I go through the proxy:

        GET /path/xxx.PDF HTTP/1.1
        Host: domain:xxxx
        Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8

        HTTP/1.1 200 OK
        Date: Mon, 25 Aug 2014 13:28:42 GMT
        Server: Microsoft-IIS/7.5
        Content-Type: application/pdf
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        ETag: "b6262940bbecf1:0"-gzip
        X-Powered-By: ASP.NET
        Cache-Control: no-cache
        Expires: Thu, 24 Aug 2017 13:28:42 GMT
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=300, max=100
        Connection: Keep-Alive
        Transfer-Encoding: chunked

    I'm guessing it's because the proxy uses Transfer-Encoding: chunked, but I'm not sure and wasn't able to turn it off to check. Browser: Chrome 36.0.1985.143 m, using the native PDF viewer. Any help getting PDF quick web view to work through the proxy would be appreciated.
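    Worth noting: in the proxied response the body is gzip-compressed and chunked, which hides the Content-Length and defeats byte-range requests. One thing to try — a sketch, assuming mod_deflate and mod_headers are active on the Apache side — is to exempt PDFs from compression so range requests can pass through:

        <LocationMatch "\.(pdf|PDF)$">
            # Don't gzip PDFs; ranged requests need a known Content-Length
            SetEnv no-gzip 1
            # Ask the backend for an uncompressed response as well
            RequestHeader unset Accept-Encoding
        </LocationMatch>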


  • Why is BIND giving me a SERVFAIL in this case? (Notes inside)

    - by imaginative
    Woke up this morning to a bunch of the following:

        root@foo:/etc/bind# dig @1.2.3.4 foo.example.com

        ; <<>> DiG 9.6.1-P2 <<>> @1.2.3.4 foo.example.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36121
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;foo.example.com.              IN      A

        ;; Query time: 0 msec
        ;; SERVER: 1.2.3.4#53(1.2.3.4)
        ;; WHEN: Thu Apr 1 09:57:59 2010
        ;; MSG SIZE  rcvd: 31

    Some background on the fictitious "1.2.3.4": it's a slave name server in my nameserver "farm". Technically I have ns1 (the master) and ns2/ns3. Currently ns1/ns2 are down for maintenance, so I left ns3 serving live traffic. That's the point — DNS is supposed to be resilient. Now the odd part is, "1.2.3.4" had been serving requests for example.com just fine for the last 4-5 days. This morning I got a phone call that it's non-responsive. After investigating, I saw the SERVFAIL message you see above. I looked into the zone file and saw the following:

        example.com IN SOA ns1.example.com. hostmaster.mail.example.com. (

    I wondered whether at this point the nameserver thought it was not authoritative over example.com, and adjusted it to the following:

        example.com IN SOA ns3.example.com. hostmaster.mail.example.com. (

    After that, it started responding again for all authoritative queries for example.com. I have no idea why. I thought these things were supposed to be normalized upon zone transfer from ns1 to ns3? Can someone please explain why this happened and how to prevent it from happening in the future? I've never had a similar problem, and because I don't understand it well, I might be missing some critical information in this question, so please let me know if I can add any detail to make things clearer. One more thing to note: I have other domains that I'm authoritative for whose SOA still says ns1.example.com rather than ns3.example.com, and those domains are serving requests just fine! Is it a matter of time before they stop responding as well and I have to change their SOA to ns3.example.com? Is this only required because ns1 and ns2 are currently offline?
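    For a slave in this state, it is worth checking whether the zone passed its SOA expire time while the masters were down — an expired slave zone answers SERVFAIL until it can transfer again. A few illustrative checks, assuming BIND 9 with rndc configured and default log locations:

        # Does the server still answer authoritatively (look for the aa flag)?
        dig @1.2.3.4 example.com SOA +norecurse

        # Look for "zone expired" or transfer errors in the logs
        grep example.com /var/log/messages | tail

        # Once the master is reachable again, force a fresh transfer
        rndc retransfer example.com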


  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs — but on closer inspection, it appears it was all recorded at once. Here are the facts as I know them:

    - Windows Server 2003 R2 w/ IIS6
    - Logging uses GMT; server local time is GMT-7
    - The application was still operating, and I have SQL data to prove it
    - Time gaps appear within a log file, not across two
    - # headers appear at the gap
    - Load balancer pings every 30 seconds
    - No caching

    Here's a particular case: an entry appears for 2009-09-21 18:09:27, then #headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries stamped 2009-09-22 01:21:54 are load balancer pings (estimated at 2/min for 6.9 hrs = 828 pings); then entries are recorded as normal. I believe these events may coincide with me deploying an ASP.NET application update onto those machines. Here's some relevant content from the logs in question:

    ex090921.log, line 3684:

        2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
        2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-21 18:04:37
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
        2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
        ... continues until line 5449
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        <eof>

    ex090922.log:

        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-09-22 00:00:16
        #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        ... continues until line 367
        2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
        2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
        ... back to normal behavior

    Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 and then switched to 200 just as the problem started: I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it, and what you see here is me renaming it back so the load balancer will use the server again. So this problem definitely coincides with me re-enabling the server. Any ideas?
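    To quantify gaps like this across many log files, something like Microsoft Log Parser can bucket hits per minute — a sketch, assuming Log Parser 2.2 and W3C-format logs, with the file mask adjusted to your log directory:

        LogParser.exe -i:IISW3C -o:CSV "SELECT date, QUANTIZE(time, 60) AS minute, COUNT(*) AS hits FROM ex0909*.log GROUP BY date, minute ORDER BY date, minute"

    Minutes with zero rows, or a single minute with thousands of rows, stand out immediately.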


  • Nginx https rewrite turns POST to GET

    - by x7311
    My proxy server runs on IP A, and this is how people access my web service. The nginx configuration redirects to a virtual machine on IP B. For the proxy server on IP A, I have this in sites-available:

        server {
            listen 443;
            ssl on;
            ssl_certificate nginx.pem;
            ssl_certificate_key nginx.key;

            client_max_body_size 200M;
            server_name localhost 127.0.0.1;
            server_name_in_redirect off;

            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

        server {
            listen 80;
            rewrite ^(.*) https://$http_host$1 permanent;

            server_name localhost 127.0.0.1;
            server_name_in_redirect off;

            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The proxy_redirect was taken from "how do I get nginx to forward HTTP POST requests via rewrite?". Everything that hits the public IP hits 443 because of the rewrite; internally, we forward to port 80 on the virtual machine. But when I run a Python script such as the one below to test our configuration:

        import requests

        data = {'username': '....', 'password': '.....'}
        url = 'http://IP_A/api/service/signup'
        res = requests.post(url, data=data, verify=False)
        print res
        print res.json
        print res.status_code
        print res.headers

    I get a 405 Method Not Allowed. In nginx we found that when the request hit the internal server, the internal nginx received a GET request, even though the original request was a POST (as shown in the Python script). So it seems the rewrite has a problem. When I commented out the rewrite, the request hit port 80 and went through, so the rewrite itself can reach our internal server; it's just that the rewrite turned the POST into a GET. Any idea how to fix this? Thank you! (This will also be asked on the nginx forum, because this is a critical blocker...)
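    This matches standard client behavior: on a 301/302 redirect, most clients replay the request as a GET. A hedged sketch of an alternative, assuming an nginx version whose return directive supports status 307 (a temporary redirect that preserves the request method):

        server {
            listen 80;
            server_name localhost 127.0.0.1;

            # 307 tells the client to repeat the SAME method over HTTPS,
            # so a POST stays a POST instead of degrading to a GET
            return 307 https://$http_host$request_uri;
        }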


  • How to link specific ports to specific domains with Apache virtual hosts?

    - by theJoe
    We have a forward-facing Linux box running Apache HTTP Server that acts as a reverse proxy for several back-end servers. The servers are accessed through specific domain names and ports and are set up as virtual hosts within Apache as such:

        Listen 8001
        Listen 8002

        <Virtualhost *:8001>
            ServerName service.one.mycompany.com
            ProxyPass / http://internal.one.mycompany.com:8001/
            ProxyPassReverse / http://internal.one.mycompany.com:8001/
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </Virtualhost>

        <Virtualhost *:8002>
            ServerName service.two.mycompany.com
            ProxyPass / http://internal.two.mycompany.com:8002/
            ProxyPassReverse / http://internal.two.mycompany.com:8002/
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </Virtualhost>

    The proxy server has only one IP address, and both domains point to it. Accessing internal.one via service.one works fine, as does accessing internal.two via service.two. The problem is that Apache does not take the requesting domain into account when accessing the virtual hosts. What I mean is that both domains work for both ports: requests for service.one:8002 proxy to internal.two:8002, and requests for service.two:8001 proxy to internal.one:8001, where ideally both of these requests should be denied. I can get around this by creating more virtual hosts that explicitly deny these requests:

        NameVirtualHost *:8001
        NameVirtualHost *:8002

        <Virtualhost *:8001>
            ServerName service.two.mycompany.com
            Redirect permanent / http://errorpage.mycompany.com/
        </Virtualhost>

        <Virtualhost *:8002>
            ServerName service.one.mycompany.com
            Redirect permanent / http://errorpage.mycompany.com/
        </Virtualhost>

    But this is not an ideal solution, since we plan to add more services to the proxy: each new port would need to be explicitly denied on all the other domains, and each new domain explicitly denied on all ports it does not use. As we add more services, the number of virtual hosts gets out of hand quickly. My question, then, is whether there is a better way. Can we explicitly tie specific ports to specific domains in a virtual host so that only that domain-port combination is processed and all other combinations are not? Things I've tried:

    - Adding NameVirtualHost *:8001, etc. without the additional virtual hosts
    - Setting ProxyRequests On and Off, as well as ProxyPreserveHost On and Off
    - Adding the server name or IP address to the virtual host header, e.g. <VirtualHost service.one.mycompany.com:8001>
    - Using the <proxy> directive inside the virtual host directive
    - Lots and lots of googling

    The proxy server is running CentOS 6.2 64-bit, Apache HTTPD Server 2.2.15. As mentioned, the proxy server has only one IP address, and all the domains we are using point to it.
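    One pattern that keeps the config linear — a sketch for Apache 2.2, not tested against this exact setup — is a single catch-all default vhost per port. Apache routes a request to the first vhost defined for a port when no ServerName matches, so one explicit default per port rejects every unknown Host header, and each new service then only adds its own vhost. The name default.invalid below is just a placeholder:

        # First vhost on port 8001 = default: reject any Host not matched below
        <VirtualHost *:8001>
            ServerName default.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>

        <VirtualHost *:8001>
            ServerName service.one.mycompany.com
            ProxyPass / http://internal.one.mycompany.com:8001/
            ProxyPassReverse / http://internal.one.mycompany.com:8001/
        </VirtualHost>

    Repeat the catch-all once per Listen port; new domains and ports then need no cross-denials.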


  • Convert apache rewrite rules to nginx

    - by Shiyu Sekam
    I want to migrate an Apache setup to nginx, but I can't get the rewrite rules working in nginx. I had a look at the official nginx documentation (http://nginx.org/en/docs/http/converting_rewrite_rules.html), but still have some trouble converting it. I've used http://winginx.com/en/htaccess to convert my rules, but this only partly works: the / part looks okay, the /library part as well, but the /public part doesn't work at all.

    Apache part:

        ServerAdmin webmaster@localhost
        DocumentRoot /srv/www/Web

        Order allow,deny
        Allow from all
        RewriteEngine On
        RewriteRule ^$ public/ [L]
        RewriteRule (.*) public/$1 [L]

        Order Deny,Allow
        Deny from all

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^pid=([0-9]*)$
        RewriteRule ^places(.*)$ index.php?url=places/view/%1 [PT,L]

        # Extract search query in /search?q={query}&l={location}
        RewriteCond %{QUERY_STRING} ^q=(.*)&l=(.*)$
        RewriteRule ^(.*)$ index.php?url=search/index/%1/%2 [PT,L]

        # Extract search query in /search?q={query}
        RewriteCond %{QUERY_STRING} ^q=(.*)$
        RewriteRule ^(.*)$ index.php?url=search/index/%1 [PT,L]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # Rewrite all other URLs to index.php/URL
        RewriteRule ^(.*)$ index.php?url=$1 [PT,L]

        Order deny,allow
        deny from all

        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn

        AddHandler php5-fcgi .php
        Action php5-fcgi /php5-fcgi
        Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization

        CustomLog ${APACHE_LOG_DIR}/access.log combined

    Nginx config:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            root /srv/www/Web;
            index index.html index.php;
            server_name localhost;

            location / {
                rewrite ^/$ /public/ break;
                rewrite ^(.*)$ /public/$1 break;
            }

            location /library {
                deny all;
            }

            location /public {
                if ($query_string ~ "^pid=([0-9]*)$") {
                    rewrite ^/places(.*)$ /index.php?url=places/view/%1 break;
                }
                if ($query_string ~ "^q=(.*)&l=(.*)$") {
                    rewrite ^(.*)$ /index.php?url=search/index/%1/%2 break;
                }
                if ($query_string ~ "^q=(.*)$") {
                    rewrite ^(.*)$ /index.php?url=search/index/%1 break;
                }
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?url=$1 break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    I haven't written the original rule set, so I'm having a hard time converting it. Would you mind giving me a hint on how to do it, or helping me convert it, please? I really want to switch over to php5-fpm and nginx :) Thanks
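    As a side note, the front-controller pattern at the end (the !-e $request_filename check) is usually expressed with try_files in nginx, which avoids if blocks entirely — a sketch, under the assumption that index.php handles all unmatched routes via its url parameter:

        location /public {
            # Serve real files and directories first, then fall back
            # to the front controller with the original path and query
            try_files $uri $uri/ /index.php?url=$uri&$args;
        }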


  • How to loop through all Illustrator files in a folder (CS6)

    - by Julian
    I have written some JavaScript to save .ai files to two separate locations with different resolutions, one of them cropped to a reduced-size artboard (courtesy of John Otterud / Articmill for the main part). There are other variables in the script that I am not using at present, but I want to leave the functionality there for a later date/additional layers to export/other resolutions etc. I can't get it to loop through all files in a folder — I cannot find the script that works, or can't insert it at the right place. I can get as far as selecting the folder, and I suppose creating an array, but after that what next? This is the create-array part of the script:

        // JavaScript Document
        // Set up variables
        var destDoc, sourceDoc, sourceFolder, newLayer;

        // Select the source folder.
        sourceFolder = Folder.selectDialog('Select the folder with Illustrator files that you want to merge into one', '~');
        destDoc = app.documents.add();

        // If a valid folder is selected
        if (sourceFolder != null) {
            files = new Array();
            // Get all files matching the pattern
            files = sourceFolder.getFiles();

    I have inserted this at the beginning of the main script (probably where I am going wrong, because I can select the folder but then nothing more):

        #target illustrator
        var docRef = app.activeDocument;
        with (docRef) {
            if (layers[i].name == 'HEADER') {   // compare, not assign
                layers[i].name = '#' + activeDocument.name;
                save()
            }
        }

        // *** Export Layers as PNG files (in multiple resolutions) ***
        var subFolderName = "For_PLMA";
        var subFolderTwoName = "For_VLP";
        var saveInMultipleResolutions = true;
        // ...
        // Note: only use one character!
        var exportLayersStartingWith = "%";
        var exportLayersWithArtboardClippingStartingWith = "#";
        // ...
        var normalResolutionFileAppend = "_VLP";
        var highResolutionFileAppend = "_PLMA";
        // ...
        var normalResolutionScale = 100;
        var highResolutionScale = 200;
        var veryhighResolutionScale = 300;

        // *** Start of script ***
        var doc = app.activeDocument;
        // Make sure we have saved the document
        if (doc.path != "") {

    Then the rest of the export script runs on from there.
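    For the looping part, a minimal ExtendScript sketch — where processDoc() is a placeholder name for whatever per-document export/save logic the main script performs — would be:

        // Collect only .ai files from the chosen folder
        var sourceFolder = Folder.selectDialog('Select the folder with Illustrator files', '~');
        if (sourceFolder != null) {
            var files = sourceFolder.getFiles('*.ai');
            for (var i = 0; i < files.length; i++) {
                var doc = app.open(files[i]);    // opened file becomes app.activeDocument
                processDoc(doc);                 // placeholder: run the export/save logic here
                doc.close(SaveOptions.DONOTSAVECHANGES);
            }
        }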


  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series. Also check out my VS 2010 and .NET 4 series for another on-going blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET

    - ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources. Lots of useful pointers.
    - Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here.
    - 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET websites. Highly recommended reading.
    - Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article). You can use this to significantly improve the load-time of your pages on the client.
    - Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET. A very useful link to bookmark. Also check out James Michael's DateTime is Packed with Goodies blog post for other DateTime tips.
    - Examining ASP.NET's Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to know about ASP.NET's built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership.

    ASP.NET with jQuery

    - An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project.
    - Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them.
    - Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated "news ticker" style UI with ASP.NET and jQuery.
    - Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView's header to automatically check/uncheck all checkboxes contained within its rows.
    - Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx).

    ASP.NET MVC

    - ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps.
    - ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3. This makes it easy to post JSON payloads to MVC action methods.
    - Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications.
    - Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate that credit card numbers are in the correct format with ASP.NET MVC.

    Silverlight

    - Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter. You can watch all of the talks online. More details on my keynote and Silverlight 5 announcements can be found here.
    - 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).
    - Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit.

    Visual Studio

    - JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript IntelliSense support within Visual Studio.
    - HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010.
    - Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: A post about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.
    - Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010.

    Hope this helps,

    Scott


  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is, for obvious reasons, very well used, so I thought that if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of the more advanced features are explained in much more detail in our ever-popular post The Execute SQL Task; here I'll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block.

    This first sample just shows adding the task, setting the basic properties for a connection and of course an SQL statement:

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Set required properties
        taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
        taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

    For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.

    Returning a single value with a Result Set

    The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier, as we have compile-time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value — one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually, you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Add variable to hold result value
        package.Variables.Add("Variable", false, "User", 0);
        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";
        // Set single row result set
        task.ResultSetType = ResultSetType.ResultSetType_SingleRow;
        // Add result set binding, map the id column to variable
        task.ResultSetBindings.Add();
        IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
        resultBinding.ResultName = "id";
        resultBinding.DtsVariableName = "User::Variable";

    For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme: set the property and map the result binding as required.

    Parameter Mapping for SQL Statements

    This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";
        // Add variable to hold parameter value
        package.Variables.Add("Variable", false, "User", "sysrowsets");
        // Add input parameter binding
        task.ParameterBindings.Add();
        IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
        parameterBinding.DtsVariableName = "User::Variable";
        parameterBinding.ParameterDirection = ParameterDirections.Input;
        parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
        parameterBinding.ParameterName = "0";
        parameterBinding.ParameterSize = 255;

    For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified for the parameter via the IDTSParameterBinding.DataType property, and these type codes are connection-specific too. My enumeration, written several years ago and shown below, was probably done by reverse-engineering a package and the API header file, but I recently found a very handy post that covers more connections as well for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

        /// <summary>
        /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
        /// </summary>
        private enum OleDBDataTypes
        {
            BYTE = 0x11,
            CURRENCY = 6,
            DATE = 7,
            DB_VARNUMERIC = 0x8b,
            DBDATE = 0x85,
            DBTIME = 0x86,
            DBTIMESTAMP = 0x87,
            DECIMAL = 14,
            DOUBLE = 5,
            FILETIME = 0x40,
            FLOAT = 4,
            GUID = 0x48,
            LARGE_INTEGER = 20,
            LONG = 3,
            NULL = 1,
            NUMERIC = 0x83,
            NVARCHAR = 130,
            SHORT = 2,
            SIGNEDCHAR = 0x10,
            ULARGE_INTEGER = 0x15,
            ULONG = 0x13,
            USHORT = 0x12,
            VARCHAR = 0x81,
            VARIANT_BOOL = 11
        }

    Download Sample code: ExecSqlPackage.cs (10KB)


  • Computer Networks UNISA - Chap 12 – Networking Security

    - by MarkPearl
    After reading this section you should be able to:

    - Identify security risks in LANs and WANs and design security policies that minimize risks
    - Explain how physical security contributes to network security
    - Discuss hardware- and design-based security techniques
    - Understand methods of encryption, such as SSL and IPSec, that can secure data in storage and in transit
    - Describe how popular authentication protocols such as RADIUS, TACACS, Kerberos, PAP, CHAP, and MS-CHAP function
    - Use network operating system techniques to provide basic security
    - Understand wireless security protocols such as WEP, WPA and 802.11i

    Security Audits

    Before spending time and money on network security, examine your network's security risks — rate and prioritize them. Different organizations have different levels of network security requirements.

    Security Risks

    Not all security breaches result from a manipulation of network technology — there are human factors that can play a role as well. The following categories are areas of consideration:

    - Risks associated with people
    - Risks associated with transmission and hardware
    - Risks associated with protocols and software
    - Risks associated with Internet access

    An effective security policy

    A security policy identifies your security goals, risks, levels of authority, designated security coordinator and team members, responsibilities for each team member, and responsibilities for each employee. In addition, it specifies how to address security breaches. It should not state exactly which hardware, software, architecture, or protocols will be used to ensure security, nor how hardware or software will be installed and configured. A security policy must address an organization's specific risks. To understand your risks, you should conduct a security audit that identifies vulnerabilities and rates both the severity of each threat and its likelihood of occurring.

    Security Policy Content

    A security policy should contain policies for each category of security, explain to users what they can and cannot do and how these measures protect the network's security, and define what "confidential" means to the organization.

    Response Policy

    A security policy should provide for a planned response in the event of a security breach. The response policy should identify the members of a response team, all of whom should clearly understand the security policy, risks, and measures in place. Some of the roles concerned could include:

    - Dispatcher – the person on call who first notices the breach
    - Manager – the person who coordinates the resources necessary to solve the problem
    - Technical support specialist – the person who focuses on solving the problem
    - Public relations specialist – the person who acts as the official spokesperson for the organization

    Physical Security

    An important element in network security is restricting physical access to its components. There are various techniques for this, including locking doors, security people at access points, etc. You should identify the following:

    - Which rooms contain critical systems or data and must be secured
    - Through what means intruders might gain access to these rooms
    - How, and to what extent, authorized personnel are granted access to these rooms
    - Whether authentication methods such as ID cards are easy to forge, etc.

    Security in Network Design

    The optimal way to prevent external security breaches from affecting your LAN is not to connect your LAN to the outside world at all. The next best protection is to restrict access at every point where your LAN connects to the rest of the world.

    Router Access List – can be used to filter or decline access to a portion of a network for certain devices.

    Intrusion Detection and Prevention

    While denying someone access to a section of the network is good, it is better to be able to detect when an attempt has been made and notify security personnel. This can be done using IDS (intrusion detection system) software. One drawback of IDS software is that it can report false positives — e.g. an authorized person who has forgotten his password attempting to log on.

    Firewalls

    A firewall is a specialized device, or a computer installed with specialized software, that selectively filters or blocks traffic between networks. A firewall typically involves a combination of hardware and software and may reside between two interconnected private networks. The simplest form of a firewall is a packet-filtering firewall, which is a router that examines the header of every packet of data it receives to determine whether that type of packet is authorized to continue to its destination or not. Firewalls can block traffic into and out of a LAN. (See the illustrative rule set below.)

    NOS (Network Operating System) Security

    Regardless of the operating system, generally every network administrator can implement basic security by restricting what users are authorized to do on a network. Some of the restrictions relate to:

    - Logons – place, time of day, total time logged in, etc.
    - Passwords – length, characters used, etc.

    Encryption

    Encryption is the use of an algorithm to scramble data into a format that can be read only by reversing the algorithm. The purpose of encryption is to keep information private. Many forms of encryption exist, and new ways of cracking encryption are continually being invented. The following are some categories of encryption:

    - Key encryption
    - PGP (Pretty Good Privacy)
    - SSL (Secure Sockets Layer)
    - SSH (Secure Shell)
    - SCP (Secure CoPy)
    - SFTP (Secure File Transfer Protocol)
    - IPSec (Internet Protocol Security)

    For a detailed explanation of each, refer to pages 596 to 604 of the textbook.

    Authentication Protocols

    Authentication protocols are the rules that computers follow to accomplish authentication. Several types exist; the following are some of the common authentication protocols:

    - RADIUS and TACACS
    - PAP (Password Authentication Protocol)
    - CHAP and MS-CHAP
    - EAP (Extensible Authentication Protocol)
    - 802.1x (EAPoL)
    - Kerberos

    Wireless Network Security

    Wireless transmissions are particularly susceptible to eavesdropping. Two wireless network security protocols are WEP and WPA.
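    To make the packet-filtering idea concrete, here is a small illustrative rule set in Linux iptables syntax — chosen only as an example of header-based filtering, not something from the textbook:

        # Allow replies to connections the LAN initiated
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Permit inbound web traffic by matching the protocol and port headers
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Block everything else by default
        iptables -A INPUT -j DROP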


  • Enabling Http caching and compression in IIS 7 for asp.net websites

    - by anil.kasalanati
    Caching – there are two ways to set HTTP caching:

    1. Use the max-age property
    2. Use the Expires header

    Doing the changes via the IIS console:

    1. Select the website for which you want to enable caching, then select HTTP Response Headers in the Features tab.
    2. Select "Expire Web content"; by changing the "After" setting you generate the max-age property for the Cache-Control header.
    3. (Screenshot of the resulting headers omitted.) You can then use a tool like Fiddler and see the 304 (Not Modified) response coming from the server.

    Doing it the web.config way – we can add a static content section in the system.webServer section:

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
          </staticContent>
        </system.webServer>

    Compression – by default, static compression is enabled on IIS 7.0, but the only thing that falls under that category is CSS, and this is not enough for most websites using lots of JavaScript. If you thought that just enabling dynamic compression would fix this, you are wrong, so please follow the steps below.

    On some machines dynamic compression is not installed; these are the steps to install it:

    1. Open Server Manager
    2. Roles > Web Server (IIS)
    3. Role Services (scroll down) > Add Role Services
    4. Add the desired role (Web Server > Performance > Dynamic Content Compression)
    5. Next, Install, Wait… Done!

    To enable it:

    1. Open Server Manager
    2. Roles > Web Server (IIS) > Internet Information Services (IIS) Manager
    3. Next pane: Sites > Default Web Site > Your Web Site
    4. Main pane: IIS > Compression

    Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types:

    1. Go to C:\Windows\System32\inetsrv\config
    2. Open applicationHost.config

    The mime map is as follows:

        <mimeMap fileExtension=".js" mimeType="application/javascript" />

    So the staticTypes section should be changed to include:

        <add mimeType="application/javascript" enabled="true" />

    Doing it the web.config way – we can add the following in the system.webServer section:

        <system.webServer>
          <urlCompression doDynamicCompression="false" doStaticCompression="true" />
        </system.webServer>

    More Information/References:

    - http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
    - http://www.west-wind.com/weblog/posts/98538.aspx
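    A quick way to verify both behaviors from the command line — an illustrative check assuming curl is available, with the host and file names substituted for your own:

        # Static content should now carry a long max-age (365 days = 31536000s)
        curl -s -D - -o /dev/null http://yourserver/styles/site.css | grep -i cache-control

        # Once application/javascript is listed under staticTypes,
        # compressed responses should show Content-Encoding: gzip
        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://yourserver/scripts/app.js | grep -i content-encoding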


  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments. There are a couple extremes that I've seen and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all.  A few bad reasons why this happens is because a developer is in a hurry, sloppy, or is interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks to no comments in code are a primary reason why teachers drill the need for commenting code into our heads.  This viewpoint assumes the lack of comments are bad because the code is bad, but there is another reason for not commenting that is gaining more popularity. I've heard/and read that code should be self documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments.  An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain.  This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases.  i.e. If an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment. These were the two no-comment extremes, so let's look at the too many comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent.  These are examples of where every single line in the code is commented.  These comments make the code harder to read because they get in the way of the algorithm.  In most cases, the comments parrot what each line of code does.  If a developer understands the language, then most statements are immediately intuitive.  i.e. what use is it to say that I'm assigning foo to bar when it's clear what the code is doing. I think that over-commenting code is a waste of time that slows down initial development and maintenance.  Understandably, the developer's intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach. I don't think the extremes do justice to code because each can make maintenance harder.  No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex.  Occasionally, it's useful to point out a nuance or reason why a piece of code is there. i.e. 
if you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application. An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario. My problem with such an approach would be that your code base becomes even more difficult to navigate and work with, because you have all of this extra code just to make the code more meaningful. My opinion is that a simple and well-stated comment stating the reasons and intention for the code is more natural and convenient to the initial developer and maintainer. I just don't agree with the approach of going out of the way to avoid making a comment. I'm also concerned that some developers would take this approach as an excuse to not comment their bad code. Another area where I like comments is documentation comments. Java has them, and so do C# and VB. They're convenient because we can build automated tools that extract these comments, and the extracted comments are often much better than no documentation at all. The "go read the code" answer doesn't always fulfill the need for a quick summary of an API. To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code. Joe
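As an illustration of the documentation comments mentioned above, here is a short C# sketch; the method, its name, and the header requirement are all hypothetical, invented only to show the style:

/// <summary>
/// Submits an order to the payment gateway.
/// </summary>
/// <param name="orderId">Identifier of the order to submit.</param>
/// <returns>The gateway confirmation number.</returns>
public string SubmitOrder(string orderId)
{
    // The (hypothetical) gateway rejects requests without the X-Api-Version
    // header; this is the kind of intention-revealing comment the post argues for.
    return "CONF-" + orderId;
}

Tools such as Sandcastle can then turn the <summary> and <param> tags into API reference pages without anyone having to read the source.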

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

Shaping the feed
The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.

Splitting the Entity
To shape our data according to the requirements above, we are going to split our Product Entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other Entity in our Query Interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the Entity Model and make a copy of the Product entity. A "Product1" Entity gets created. Rename Product1 to ProductDetail. Right click on the Product entity and select "Add Association". Make a one to one association between Product and ProductDetail. Keep only the properties we wish to expose on the Product entity and delete all other properties on it. You delete a property on an Entity by right clicking on the property and selecting "delete". Keep the ProductID on the ProductDetail. Delete any other property on the ProductDetail entity that is already present in the Product entity.

Mapping Entity to Database Tables
Right click on "ProductDetail" and go to "Table Mapping". Add a mapping to the "Products" table in the Mapping Details.

Add a referential constraint
Let's add a referential constraint, which is similar to a referential integrity constraint in SQL. Double click on the Association between the Entities and add the constraint with "Principal" set to "Product".

Let us review what we did so far:
- We made a copy of the Product entity and called it ProductDetail
- We created a one to one association between these entities
- Excluding the ProductID, we made sure properties were not duplicated between these entities
- We mapped the ProductDetail entity to the Products table (Entity to Database)
- We added a referential constraint between the entities

Let's build our project. We get the following error: "'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …" The reason for this error is that our Product Entity no longer has a "Discontinued" property; we "moved" it to the ProductDetail entity, since we want our Product Entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our Query Interceptor like so:

[QueryInterceptor("Products")]
public Expression<Func<Product, bool>> OnReadProducts()
{
    return o => o.ProductDetail.Discontinued == false;
}

Similarly, all "hidden" properties of the Product table are available to us internally (through the ProductDetail Entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.
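As an aside, the service class itself (wired up in Part 1) determines which entity sets are reachable at all; a minimal sketch, assuming the model container is named NorthwindEntities and the service class ProductsDataService (the actual names in the sample project may differ):

public class ProductsDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Only Products is granted (read-only) access; ProductDetail has no
        // access rule, so it stays internal to the service.
        config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Because ProductDetail is never exposed, the Discontinued flag and the other "hidden" columns remain usable in interceptors but invisible to feed consumers.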
To see the data in JSON format, you have to create a request with the following request header: Accept: application/json, text/javascript, */* (easy to do in jQuery). The result should look like this:

{ "d" : { "results": [
    { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" },
      "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 },
    { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" },
      "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 },
    {
...
...

If anyone has the $format operation working, please post a comment; it was not working for me at the time of writing this. We have successfully pre-filtered our data to expose only products that have not been discontinued, and shaped our data so that only certain properties of the Entity are exposed. Note that there are several other ways you could implement this, like creating a QueryView, Stored Procedure, or DefiningQuery. You have seen how easy it is to create an OData feed, shape the data, and pre-filter it while hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate people I have ever met, Pablo Castro, the architect of WCF Data Services (codename "Astoria"). Watch his MIX 2010 presentation titled "OData: There's a Feed for That" here. Download Sample Project for VS 2010 RTM NortwindODataFeed.zip
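Since the post notes that setting the Accept header is easy to do in jQuery, here is a minimal sketch (the URL is taken from the JSON output above; everything else is generic):

$.ajax({
    url: "http://localhost.:2576/DataService.svc/Products",
    dataType: "json",
    beforeSend: function (xhr) {
        // Ask WCF Data Services for JSON instead of the default Atom feed
        xhr.setRequestHeader("Accept", "application/json, text/javascript, */*");
    },
    success: function (data) {
        // data.d.results holds the array of shaped Product entries
        console.log(data.d.results);
    }
});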

    Read the article

  • Debugging Tips for Skinning

    - by Christian David Straub
Another guest post by Jeanne Waldman. If you are developing a skin for your Fusion Application in JDeveloper, you should know these tips:
1. Firebug is your friend
2. Uncompress the css style classes
3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
4. View the generated CSS file

1. Firebug is your friend
Install Firebug (http://getfirebug.com/layout) into Firefox and use it to view your rendered jspx page in the browser. You can select the HTML dom nodes on your page and see the css styles applied to each dom node.

2. Uncompress the css style classes
By default, the style classes that are rendered are compressed. You may see style classes like class="x10" and class="x2e", but in your skin css file you have style classes like af|inputText::content or af|panelBox::header. It is easier for you to develop and debug a skin with Firebug if you see the uncompressed style classes. To do this:
a. Open web.xml
b. Add
<context-param>
  <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
  <param-value>true</param-value>
</context-param>
c. Save
d. Restart the server and re-run your page.

3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
For performance's sake, the ADF Faces framework does not check whether your skin .css file has changed on every render. But this is exactly what you want to happen when you are developing or debugging a skin: you want your changes to get noticed right away, without restarting the server. To do this:
a. Open web.xml
b. Add
<context-param>
  <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSPs change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
  <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
  <param-value>true</param-value>
</context-param>
c. Save
d. Restart the server and re-run your page.
e. From then on, you can change your skin's .css file, save it, refresh your page, and you should see the changes right away.

4. View the generated CSS file
There are different ways to view the generated CSS file, which is your skin's css file merged with all the skins it extends, processed, generated to the filesystem, and linked to your generated html page. One way is to view it with Firebug. The problem with this approach is that you might see something a little different from the actual css file, because Firebug may do some extra processing. I like to view the generated css file by: right click on your page in the browser

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true: copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this; let's see if we can't identify all of them.

Write your query to only include the data you want
Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let's look at a few more practical ways to do this.

Hide the unwanted columns
Mouse right click on a column header. In the context menu, select 'Columns.' Hide the columns you don't want. Copy and paste.
(Screenshot: WYSIWYG Grids, Hide Columns and Filter Rows)

Mouse select the columns
Obvious, but a bit painful. For a very large dataset, you'll be holding down the Shift and PageDown buttons, but it works. Remember to use Ctrl+Shift+C to get the column headers with the data.

Use the Export Wizard
This used to be called 'Unload' – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select 'Export…' Select your format – I suggest 'delimited' or 'fixed' for copying data to the clipboard. You can export to the clipboard, yes you can! Click 'Next.' Click in the Columns dialog, and choose the columns you want copied.
(Screenshot: Trim the columns you don't want copied)
Click 'Finish.' Alt or Ctrl tab to your window or application of choice. And paste!

"FIRST_NAME" "LAST_NAME"
"Donald" "OConnell"
"Douglas" "Grant"
"Jennifer" "Whalen"
"Pat" "Fay"
"Susan" "Mavris"
"William" "Gietz"
"Alexander" "Hunold"
"Bruce" "Ernst"
"David" "Austin"
"Valli" "Pataballa"
"Diana" "Lorentz"
"Daniel" "Faviet"
"John" "Chen"
"Ismael" "Sciarra"
"Jose Manuel" "Urman"
"Luis" "Popp"
"Alexander" "Khoo"
"Shelli" "Baida"
"Sigal" "Tobias"
"Guy" "Himuro"
"Karen" "Colmenares"
"Matthew" "Weiss"
"Adam" "Fripp"
"Payam" "Kaufling"
"Shanta" "Vollman"
"Kevin" "Mourgos"
"Julia" "Nayer"
"Irene" "Mikkilineni"
...

There are probably at least 2 or 3 more ways, but… try these and let me know how we can improve things. I've already gotten a request to be able to include the SQL text used to populate the dataset with the copy to clipboard, and it's now on our to-do list.
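For what it's worth, the first tip in SQL form (assuming the data above comes from the EMPLOYEES table in the HR sample schema) is simply:

SELECT first_name, last_name
  FROM employees
 ORDER BY last_name;

Only the two columns you want end up in the grid, so a plain copy and paste grabs nothing extra.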

    Read the article

  • Installing gtk-config and/or fsv in Ubuntu 10.10

    - by Wayne Werner
Hi, I'm trying to install the File System Visualizer (think "It's a UNIX System! I know this!" from Jurassic Park) on Ubuntu 10.10. I've got the .tar.gz downloaded and extracted. However, when I ./configure, I get this output:

loading cache ./config.cache
checking for a BSD compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking whether make sets ${MAKE}... yes
checking for working aclocal... found
checking for working autoconf... found
checking for working automake... found
checking for working autoheader... found
checking for working makeinfo... missing
checking for gcc... gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
checking whether we are using GNU C... yes
checking whether gcc accepts -g... yes
checking how to run the C preprocessor... gcc -E
checking for ranlib... ranlib
checking for POSIXized ISC... no
checking for dirent.h that defines DIR... yes
checking for opendir in -ldir... no
checking for ANSI C header files... yes
checking whether time.h and sys/time.h may both be included... yes
checking for strings.h... yes
checking for sys/time.h... yes
checking for unistd.h... yes
checking for working const... yes
checking for mode_t... yes
checking for uid_t in sys/types.h... yes
checking for pid_t... yes
checking for size_t... yes
checking for comparison_fn_t... yes
checking for st_blocks in struct stat... yes
checking whether struct tm is in sys/time.h or time.h... time.h
checking for working alloca.h... yes
checking for alloca... yes
checking for working fnmatch... yes
checking for strftime... yes
checking for getcwd... yes
checking for gettimeofday... yes
checking for mktime... yes
checking for strcspn... yes
checking for strdup... yes
checking for strspn... yes
checking for strtod... yes
checking for strtoul... yes
checking for scandir... yes
checking for inline... inline
checking for off_t... yes
checking for unistd.h... (cached) yes
checking for getpagesize... yes
checking for working mmap... yes
checking for argz.h... yes
checking for limits.h... yes
checking for locale.h... yes
checking for nl_types.h... yes
checking for malloc.h... yes
checking for string.h... yes
checking for unistd.h... (cached) yes
checking for sys/param.h... yes
checking for getcwd... (cached) yes
checking for munmap... yes
checking for putenv... yes
checking for setenv... yes
checking for setlocale... yes
checking for strchr... yes
checking for strcasecmp... yes
checking for strdup... (cached) yes
checking for __argz_count... yes
checking for __argz_stringify... yes
checking for __argz_next... yes
checking for stpcpy... yes
checking for LC_MESSAGES... yes
checking whether NLS is requested... yes
checking whether included gettext is requested... no
checking for libintl.h... yes
checking for gettext in libc... yes
checking for msgfmt... /usr/bin/msgfmt
checking for dcgettext... yes
checking for gmsgfmt... /usr/bin/msgfmt
checking for xgettext... /usr/bin/xgettext
checking for gtk-config... no
checking for GTK - version >= 1.2.1... no
*** The gtk-config script installed by GTK could not be found
*** If GTK was installed in PREFIX, make sure PREFIX/bin is in
*** your path, or set the GTK_CONFIG environment variable to the
*** full path to gtk-config.
configure: error: Cannot find proper GTK+ version

Obviously it's looking for gtk-config. However, apparently it doesn't exist in the repos anymore. Then this post mentioned that gtkglarea solved their problem, as mentioned in this file.
Of course that poster neatly forgets to mention exactly what and how gtkglarea solved their problem, and Google is mostly devoid of information on the problem. So I come here asking for help! I would like to install fsv, but it tells me gtk-config doesn't exist. How can I fix this problem in Ubuntu 10.10? Thanks!

    Read the article

  • jQuery Datatable in MVC … extended.

    - by Steve Clements
There are a million plugins for jQuery, and when a web forms developer like myself works in MVC, making use of them is par for the course! MVC is the way now; web forms are but a memory!! Grids / tables are my focus at the moment. I don't want to get into writing reams of css and html, but it's not acceptable to simply dump a table on the screen; functionality like sorting, paging, fixed headers and perhaps filtering are expected behaviour. What isn't always required, though, is the massive functionality (like editing) you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don't need it. That is where the jQuery DataTable plugin comes in. It doesn't have editing "out of the box" (you can add other plugins as you require to achieve such functionality). What it does, though, is very nicely format a table (and integrate with jQuery UI) without needing to hook up Async actions etc. Take a look here… http://www.datatables.net

I did in the first instance start looking at the Telerik MVC grid control – I'm a fan of Telerik controls, and if you are developing an in-house or open source app you get the MVC stuff for free…nice! Their grid, however, is far more than I require. Note: Using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing – I won't cover that here though, mainly because I don't have a clear answer on the best way to solve it!

One nice thing about the dataTable above is how easy it is to extend (http://www.datatables.net/examples/plug-ins/plugin_api.html), and there are some nifty examples on the site already… I, however, have a requirement that wasn't on the site… I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable if required within its own space, i.e. everything above the grid doesn't scroll as well. Now a CSS master may have a great solution to this… I'm not that master and so didn't! The content above the grid can vary, so any kind of fixed positioning is out. So I wrote a little extension for the DataTable and hooked that up to the document.ready and window.resize events.

Initialising my dataTable(s)…

$(document).ready(function () {
  var dTable = $(".tdata").dataTable({
    "bPaginate": false,
    "bLengthChange": false,
    "bFilter": true,
    "bSort": true,
    "bInfo": false,
    "bAutoWidth": true,
    "sScrollY": "400px"
  });
});

My extension to the API to give me the resizing…

// **********************************************************************
// jQuery dataTable API extension to resize grid and adjust column sizes
$.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) {
  var id = oSettings.nTable.id;
  var dt = $("#" + id);
  var top = dt.position().top;
  var winHeight = $(document).height();
  var remain = (winHeight - top) - 83;
  dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;");
  this.fnAdjustColumnSizing();
}

This is very much in debug mode, so pretty verbose at the moment – I'll tidy that up later! You can see the last call is a call to an existing method; as the columns are fixed, and that normally involves some CSS voodoo, a call to adjust those sizes is required. Just above it is the style that the dataTable gives the grid wrapper div; I got that from some firebug action, and I stick in my new height. The -83 gives me the space at the bottom I require for the fixed footer!
Finally, I hook that up to the load and window resize. I'm actually using jQuery UI tabs as well, so I've got it in the show event of the tabs.

$(document).ready(function () {
  var oTable;
  $("#tabs").tabs({
    "show": function (event, ui) {
      oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable();
      if (oTable.length > 0) {
        oTable.fnSetHeightToBottom();
      }
    }
  });
  $(window).bind("resize", function () {
    oTable.fnSetHeightToBottom();
  });
});

And that's all there is to it. Testament to the wonders of jQuery and the immense community surrounding it, to which I am extremely grateful. I've also hooked up some custom column filtering on the grid – pretty normal stuff though – you can get what you need for that from their website. I do hide the out-of-the-box filter input, as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work, and that input comes with it! Tip: fnFilter is the method you want, with the column index as a param – I used data tags to simplify that one.
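For what it's worth, a minimal sketch of that kind of per-column filtering with fnFilter (the input id and column index here are made up for illustration):

$(document).ready(function () {
    var oTable = $(".tdata").dataTable({ "bFilter": true });
    // Filter only column 2 as the user types; fnFilter(value, columnIndex)
    // restricts the search to that column instead of the whole table.
    $("#col2-filter").keyup(function () {
        oTable.fnFilter(this.value, 2);
    });
});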

    Read the article

  • Sort Your Emails by Conversation in Outlook 2010

    - by Matthew Guay
Do you prefer the way Gmail sorts your emails by conversation? Here's how you can use this handy feature in Outlook 2010 too. One exciting new feature in Outlook 2010 is the ability to sort and link your emails by conversation. This makes it easier to know what has been discussed in emails, and helps you keep your inbox tidier. Some users don't like their emails linked into conversations, and in the final release of Outlook 2010 it is turned off by default. Since this is a new feature, new users may overlook it and never know it's available. Here's how you can enable conversation view and keep your email conversations accessible and streamlined.

Activate Conversation View
By default, your inbox in Outlook 2010 will look much like it always has in Outlook… a list of individual emails. To view your emails by conversation, select the View tab and check the Show as Conversations box on the top left. Alternately, click on the Arrange By tab above your emails, and select Show as Conversations. Outlook will ask if you want to activate conversation view in only this folder or all folders. Choose All folders to view all emails in Outlook in conversations. Outlook will now re-sort your inbox, linking emails in the same conversation together. Individual emails that don't belong to a conversation will look the same as before, while conversations will have a white triangle caret on the top left of the message title. Select the message to read the latest email in the conversation, or click the triangle to see all of the messages in the conversation. Now you can select and read any one of them. Most email programs and services include the previous email in the body of an email when you reply. Outlook 2010 can recognize these previous messages as well. You can navigate between older and newer messages from popup Next and Previous buttons that appear when you hover over the older email's header. This works both in the standard Outlook preview pane and when you open an email in its own window.

Edit Conversation View Settings
Back in the Outlook View tab, you can tweak your conversation view to work the way you want. You can choose to have Outlook Always Expand Conversations, Show Senders Above the Subject, and Use Classic Indented View. By default, Outlook will show messages from other folders in the conversation, which is generally helpful; however, if you don't like this, you can uncheck it here. All of these settings will stay the same across all of your Outlook accounts. If you choose Indented View, it will show the title on top and then an indented message entry underneath showing the name of the sender. The Show Senders Above the Subject view makes it more obvious who the email is from and who else is active in the conversation. This is especially useful if you usually only email certain people about certain topics, making the subject lines less relevant. Or, if you decide you don't care for conversation view, you can turn it off by unchecking the box in the View tab as above.

Conclusion
Although it may take new users some time to get used to, conversation view can be very helpful in keeping your inbox organized and letting important emails stay together. If you're a Gmail user syncing your email account with Outlook, you may find this useful, as it makes Outlook 2010 work more like Gmail, even when offline. If you'd like to sync your Gmail account with Outlook 2010, check out our articles on syncing it with POP3 and IMAP.

    Read the article

  • can't load big files to server with php [closed]

    - by yozhik
    Hi all! I can't load big files to server. The problem is in that file $_FILES["filename"]["tmp_name"] is empty if file a little more bigger then 2mb. I tried to change variables in php.ini upload_max_filesize = 700M post_max_size = 16M but not working to. Also tried to add this variables to my .httaccess file - but 500 error appears. Error code while uploading=1. UPLOAD_ERR_INI_SIZE Value: 1; The uploaded file exceeds the upload_max_filesize directive in php.ini. Here is my uppload.php page, please anwer what I doing wrong? Thanx! <?php if(strlen($_FILES["filename"]["name"])) { $folder = "uploads/"; echo $folder; $error = ""; if($_FILES["filename"]["size"] > 1024*700*1024) { $error .= "<b><p class=ErrorMessage>?????? ????? ????????? 5Mb</p></b><br>"; header("Location: upload.php?error=".$error, true, 303 ); } if(!file_exists($folder.="hh/")) { if(!mkdir($folder, 0700)) $error .= "<b><p class=ErrorMessage>Folder not created</p></b><br>"; } //echo "<br>".$_FILES["filename"]["tmp_name"]."<br>"; echo $folder.$_FILES["filename"]["name"]."<br>"; echo $_FILES["filename"]["error"]."<br>"; if(move_uploaded_file($_FILES["filename"]["tmp_name"], $folder.$_FILES["filename"]["name"])) { echo("???? ??????? ???????? <br>"); echo("?????????????? ?????: <br>"); echo("??? ?????: "); echo($_FILES["filename"]["name"]); echo("<br>?????? ?????: "); echo($_FILES["filename"]["size"]); echo("<br>??????? ??? ????????: "); echo($folder.=$_FILES["filename"]["name"]); echo("<br>??? ?????: "); echo($_FILES["filename"]["type"]); } else { $error .= "<b><p class=ErrorMessage>?????? ???????? ?????</p></b><br>"; } } ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>???????? ??? ????????</title> </head> <body> <?php if(isset($_REQUEST["error"])) { echo $_REQUEST["error"]; } ?> <h2><p><b> ????? ??? ???????? ?????? </b></p></h2> <form action="upload.php" method="post" enctype="multipart/form-data"> <input type="file" name="filename" READONLY><br> <input name="Upload" type="submit" value="Upload"><br> </form> </body> </html>

    Read the article

  • top Tweets SOA Partner Community – May 2012

    - by JuergenKress
Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
- SOA Community: BPMN2.0 Oracle notations poster from eaiesb http://wp.me/p10C8u-pu
- Torsten Winterberg: Look out for new Oracle #BPM edition coming up soon: The Oracle BPM Standard edition! Great news for easy entry, small licence fees. Yes!
- Danilo Schmiedel: Had a great chat with customer yesterday about #OracleBPM. Next step will be a 5day event combining modeling and implementation @soacommunity
- Frank Nimphius: Still reading "Oracle Business Process Management Suite 11g Handbook". Excellent resource for a non-SOA but ADF guy like me ;-)
- Oracle: New webcast: Maximize #Oracle #WebLogic Server ROI with Oracle #Enterprise #Manager 12c on May 2 at 10 am PT. Register http://bit.ly/JFUrR9
- OTNArchBeat: BPM in Financial Services Industry | Sanjeev Sharma http://bit.ly/HCCxui
- JDeveloper & ADF: BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations http://dlvr.it/1V9SxR
- Oracle UPK & Tutor: Collaborate Attendees: Visit the UPK demo pod, SIGS, and sessions: If you are attending Collaborate 2012 - Sun. http://bit.ly/J39z65
- Heidi Buelow: see #fmw track RT @demed: Are you going to #KSCOPE12 in San Antonio, June 24-28? http://kscope12.com/component/seminar/seminarslist?topicsid=6 Use promo code Fusion for discount!
- Sabine Leitner: #SIG #Middleware 15.05. Frankfurt #Oracle #DOAG Planung & Aufbau WebLogic Server #WLS http://bit.ly/HKsCWV @OracleWebLogic @soacommunity
- SOA Community: MDS explorer by Red Samurai http://wp.me/p10C8u-pp
- Biemond: Retrieve or set a HTTP header from Oracle BPEL: With Oracle SOA Suite 11g patch 12928372 you can finally retrie http://bit.ly/JejTHC
- Lucas Jellema: Call for papers for UKOUG 2012 has opened: http://techandebs.ukoug.org/default.asp?p=9306 (deadline 1st of June)
- OTNArchBeat: BPM API usage: List all BPM Processes for a user | Kavitha Srinivasan http://bit.ly/IJKVfj
- demed: SOA, Cloud + Service Tech symposium (London, Sep 24-25) call for paper is open http://www.servicetechsymposium.com/call2012.php @techsymp #oraclesoa
- OracleBlogs: Lessons learned configuring OER 11g Workflows http://ow.ly/1iMsKh
- OTNArchBeat: Scripting WebLogic Admin Server Startup | Antony Reynolds http://bit.ly/IH5ciU
- orclateamsoa: A-Team Blog #ateam: BPM API usage: List all BPM Processes for a user http://ow.ly/1iJADp
- Lucas Jellema: Just blogged about our Live FMW Application Development show during OBUG 2012, next Tuesday 24th April in Maastricht:
- OracleBlogs: OEG integration with OSB/OWSM - 11g http://ow.ly/1iKx7G
- SOA Community: SOA Community Newsletter April 2012 http://wp.me/p10C8u-pl
- Frank Dorst: RT @whitehorsesnl: Whiteblog: BPM Process Spaces in Oracle Webcenter (Patch Set 5) (http://bit.ly/Hxzh29) #soacommunity #bpm #oracle
- David Shaffer: The Advanced SOA suite training class next week in Redwood City is full! Learned a lot about accepting credit card payments.
- OTNArchBeat: Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri http://bit.ly/IgI8GN
- SOA Community: Oracle Fusion Middleware Innovation Awards 2012, Call for Nominations #ofmaward #soa #bpm #soacommunity
- OTNArchBeat: Updated SOA Documents now available in ITSO Reference Library http://bit.ly/I3Y6Sg
- Oracle Middleware: Data Integrator & SOA - why 2 products better than one for integration? Webcast: Apr 24 10 AM PT http://bit.ly/IzmtKR
- Andrejus Baranovskis: Red Samurai MDS Cleaner V2.0 http://fb.me/FxLVz82w
- SOA Community: "@rluttikhuizen: Chapter 4 of SOA Made Simple book 'Classification of Services' ready for collegial review" - can #soacommunity get a preview?
- Xavier Verhaeghe: #Gartner figures are out: #Oracle top in App Server market share (43.1%) and Relational #Database, too (48.8%) in 2011
- Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 26.04. FFM #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
- Michel Schildmeijer: @wlscommunity @MiddlewareMagic @OTNArchBeat @Oracle_Fusion Oracle WebLogic / SOA Suite 11g HACMP Cluster take-over http://lnkd.in/G78qMd
- Oracle Middleware: Hear how ODI and SOA's unified approach are key to untangling your business. April 24 10AM PT http://bit.ly/IdcsUz #Oracle
- OTNArchBeat: Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri http://bit.ly/IswR9K
- SOA Community: Integrating with Oracle Fusion Applications: Discovering Integration Artifacts https://blogs.oracle.com/governance/entry/integrating_with_oracle_fusion_applications #soacommunity #oer #governance
- OracleBlogs: Tuning B2B Server Engine Threads in SOA Suite 11g http://ow.ly/1iH5bx
- OracleBlogs: Top Tweets SOA Partner Community April 2012 http://ow.ly/1iVHfA
- SOA Community: Oracle SOA Suite 11g Database Growth Management http://wp.me/p10C8u-pi
- Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 24.04. München #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
- SOA Community: Testing Business Rules by Mark Nelson http://redstack.wordpress.com/2012/04/18/testing-business-rules/ #soacommunity #soa #rules #oracle
- SOA Community: Top Tweets SOA Partner Community - April 2012 http://wp.me/p10C8u-pn
- OTNArchBeat: Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration - April 24 http://bit.ly/IQexqT
- OTNArchBeat: "Do more with SOA Integration: Best of Packt" contributors include @gschmutz, @llaszews, many others http://amzn.to/HVWwYt
- ServiceTechSymposium: Symposium agenda page coming together - page launched today with keynotes, sessions to be added shortly. http://www.servicetechsymposium.com/agenda2012.php
- SOA Community: Shipping Specialization plaques - congratulations #Fujitsu - request yours https://soacommunity.wordpress.com/2011/02/23/who-are-the-soa-experts-specialization-recognized-by-customers/ #soacommunity #OPN http://pic.twitter.com/YMRm2ion
- ServiceTechSymposium: Call for Presentations submission deadline moved up to May 21, 2012. Send your presentation submissions ASAP!
- ServiceTechSymposium: Symposium keynote by Vicente Navarro, European Space Agency, added to agenda: "SOA & Service-Orientation at the European Space Agency"
- SOA Community: Running a large #soa project? Make sure you read Oracle SOA Suite 11g Database Growth Management #soacommunity #opn
- SOA Community: List all BPM Processes for a user by Yogesh l #bpm #oracle #soacommunity

For regular information on Oracle SOA Suite, become a member of the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required).

    Read the article

  • How to build the Darling project on Ubuntu 13.10?

    - by mirror27
The Darling project is an open source Darwin/OS X emulation layer for Linux. I downloaded the source code with git and tried to build it with cmake, but it failed. The documentation says I need these packages:

clang 3.1+
GCC 4.6+ (yes, you still need GCC for header files)
libkqueue
libbsd
gnustep-base ("Foundation")
gnustep-gui ("Cocoa")
gnustep-corebase ("CoreFoundation")
libobjc2
libudev
openssl
libasound
libav
libgc

but I could not find them in apt or in the Software Center. Also, cmake showed this result:

No build type selected, default to Debug
This is a 64-bit build
Building ObjC ABI 2
You have called ADD_LIBRARY for library Carbon without any source files. This typically indicates a problem with your CMakeLists.txt file
You have called ADD_LIBRARY for library AppKit without any source files. This typically indicates a problem with your CMakeLists.txt file
You have called ADD_LIBRARY for library auto without any source files. This typically indicates a problem with your CMakeLists.txt file
CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files:
LIBGNUSTEPCOREBASE_INCLUDE_DIR
  used as include directory in directory /home/mirror/work/darling/darling/src/motool
  used as include directory in directory /home/mirror/work/darling/darling/src/util
  used as include directory in directory /home/mirror/work/darling/darling/src/libmach-o
  used as include directory in directory /home/mirror/work/darling/darling/src/libdyld
  used as include directory in directory /home/mirror/work/darling/darling/src/dyld
  used as include directory in directory /home/mirror/work/darling/darling/src/dyld
  used as include directory in directory /home/mirror/work/darling/darling/src/libSystem
  used as include directory in directory /home/mirror/work/darling/darling/src/libltdl
  used as include directory in directory /home/mirror/work/darling/darling/src/Cocoa
  used as include directory in directory /home/mirror/work/darling/darling/src/libobjcdarwin
  used as include directory in directory /home/mirror/work/darling/darling/src/CoreFoundation
  used as include directory in directory /home/mirror/work/darling/darling/src/libncurses
  used as include directory in directory /home/mirror/work/darling/darling/src/CoreSecurity
  used as include directory in directory /home/mirror/work/darling/darling/src/CoreServices
  used as include directory in directory /home/mirror/work/darling/darling/src/ExceptionHandling
  used as include directory in directory /home/mirror/work/darling/darling/src/IOKit
  used as include directory in directory /home/mirror/work/darling/darling/src/Foundation
  used as include directory in directory /home/mirror/work/darling/darling/src/Carbon
  used as include directory in directory /home/mirror/work/darling/darling/src/CoreVideo
  used as include directory in directory /home/mirror/work/darling/darling/src/OpenGL
  used as include directory in directory /home/mirror/work/darling/darling/src/thin
  used as include directory in directory /home/mirror/work/darling/darling/src/thin
  used as include directory in directory /home/mirror/work/darling/darling/src/libstdc++darwin
LIBKQUEUE_INCLUDE_DIR
  [used as include directory in the same list of directories as above]
LIBOBJC2_INCLUDE_DIR
  [used as include directory in the same list of directories as above]
Configuring incomplete, errors occurred!

How can I build the Darling project?

    Read the article

< Previous Page | 265 266 267 268 269 270 271 272 273 274 275 276  | Next Page >