Search Results

Search found 14539 results on 582 pages for 'date conversion'.


  • What do these messages in the Qmail maillog indicate?

    - by Griffo
    There seems to be an endless supply of messages in the Qmail maillog for a single address. Can anyone shed some light on why this might be and whether it is a problem? To me it looks like either spam or some sort of unhandled problem. It strikes me as unusual that the 'from=' field is blank. This is on a VPS using Plesk, in case that's important.

        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23593]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: [email protected]

    EDIT: Here's a sample of one of the emails:

        Received: (qmail 5603 invoked for bounce); 29 Jun 2011 07:46:31 +0100
        Date: 29 Jun 2011 07:46:31 +0100
        From: [email protected]
        To: [email protected]
        Subject: failure notice

        Hi. This is the qmail-send program at vps-1001108-595.cp.blacknight.com.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.

        <[email protected]>:
        200.147.36.13 does not like recipient.
        Remote host said: 450 4.7.1 Client host rejected: cannot find your hostname, [78.153.208.195]
        Giving up on 200.147.36.13.
        I'm not going to try again; this message has been in the queue too long.

        --- Below this line is a copy of the message.

        Return-Path: <[email protected]>
        Received: (qmail 15585 invoked by uid 48); 22 Jun 2011 07:38:26 +0100
        Date: 22 Jun 2011 07:38:26 +0100
        Message-ID: <[email protected]>
        To: [email protected]
        Subject: Cadastre-se e Concorra ? um Carro!
        MIME-Version: 1.0
        Content-type: text/html; charset=iso-8859-1
        From: Cielo Fidelidade <[email protected]>

        <!DOCTYPE HTML>
        <html>
        ... <body text removed>
        </html>

    If I understand this correctly, this is saying that an email sent by my server, from the address [email protected], could not be delivered. However, [email protected] is not a valid email address on my server, so how can email be sent from this address on my server? I have tested whether my server is acting as an open relay, and it isn't. So how else could this be happening? I am getting thousands of these every day. What can I do to prevent it?
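
    These bounces look like classic backscatter: spam is injected with a forged local sender, delivery fails, and qmail bounces to the forged address. A hedged way to gauge the volume from the shell (the maillog path below is the usual Plesk default and may differ on your VPS):

        # Count bounce messages qmail has generated (path assumed from Plesk defaults)
        grep -c 'invoked for bounce' /usr/local/psa/var/log/maillog

        # See which handler log lines recur most often (forged senders/recipients
        # tend to dominate the top of this list)
        grep 'qmail-remote-handlers' /usr/local/psa/var/log/maillog \
            | awk '{print $NF}' | sort | uniq -c | sort -rn | head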


  • Fix overscan in Linux with Intel graphics Vizio HDTV

    - by Padenton
    I am connecting my server to my HDTV so that I can conveniently display it there. My VIZIO HDTV cuts off all 4 edges. I already realize it is not optimal to be running a GUI on a server; this server will not have much external traffic, so I prefer it for convenience. I have already spent countless hours searching for a fix, but all I could find required an ATI or NVIDIA graphics card, or didn't work. In Windows, the Intel driver has a setting for underscan, though it seems only to be available by a glitch.

    Here are my specs:

    - Ubuntu Linux (Quantal 12.10) (likely to switch to Arch)
    - This is a home server computer, with KDE for managing (for now, at least)
    - Graphics: Intel HD Graphics 4000 from Ivy Bridge
    - Motherboard: ASRock Z77 Extreme4
    - CPU: Intel Core i5-3450
    - My monitors: Dell LCD monitor; Vizio VX37L_HDTV10A 37" on HDMI input

    I have tried all of the following, with both HDMI-to-HDMI and DVI-to-HDMI cables connected to the ports on my motherboard:

    - Setting properties in xrandr
    - Making sure drivers are all up to date
    - Trying several different modes

    The TV was "cheap"; max resolution 1080i. I am able to get a 1920x1080 modeline, in both GNU/Linux and Windows, without difficulty. There is no setting in the TV's menu to fix the overscan (I have tried all of them; I realize it's not always called overscan). I have been in the service menu for the TV, which still does not contain an option to fix it. No aspect ratio settings, etc. The TV has a VGA connector, but I am unsure if it would fix the problem, as I don't have a VGA cable long enough, and am not sure it would get me the 1920x1080 resolution I want. Using another resolution does not fix the problem. I tried custom modelines with the dimensions of my screen's viewable area, but it wouldn't let me use them.

    Ubuntu apparently doesn't automatically generate an xorg.conf file for use. I read somewhere that modifying it may help. I tried X -configure several times (with reboots, etc.) but it consistently gave the following error messages.

    In the log file:

        ...
        (WW) Falling back to old probe method for vesa
        Number of created screens does not match number of detected devices.
        Configuration failed.

    In the output:

        ...
        (++) Using config file: "/root/xorg.conf.new"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        Number of created screens does not match number of detected devices.
        Configuration failed.
        Server terminated with error (2). Closing log file.

    I also tried using the 'overscan' property in xrandr:

        root@xxx:/home/xxx# xrandr --output HDMI1 --set overscan off
        X Error of failed request:  BadName (named color or font does not exist)
          Major opcode of failed request:  140 (RANDR)
          Minor opcode of failed request:  11 (RRQueryOutputProperty)
          Serial number of failed request:  42
          Current serial number in output stream:  42

    'overscan on', 'underscan off', and 'underscan on' were all tried as well. I originally tried with Ubuntu 12.04, but failed and so updated to 12.10 when it was released. All software is up to date. I am not opposed to reinstalling my OS, and likely will anyway (my preference being Arch).
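
    One workaround sometimes suggested for Intel hardware (which lacks a driver-level underscan knob) is shrinking the picture with an xrandr transform so the TV's cropped area still contains the whole desktop. A hedged sketch: the output name, scale factors, and offsets are all guesses to tune by eye, and if 0.90 zooms the wrong way, try values above 1 instead:

        # Scale the HDMI output to roughly 90% and shift it toward the visible area.
        # HDMI1, the 0.90 factors and the pixel offsets are assumptions to adjust.
        xrandr --output HDMI1 --transform 0.90,0,-50,0,0.90,-30,0,0,1

        # Undo the experiment if it makes things worse
        xrandr --output HDMI1 --transform none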


  • Duplicate DNS Zones (Error 4515 in Event Log)

    - by Campo
    I am getting these two errors in the DNS event log (errors at the end of this question). I have confirmed I do have duplicate zones. I am wondering which ones to delete. The DomainDnsZones partition contains all of our DNS records, but it does not have the _msdcs zone; that is in the ForestDnsZones partition, along with the duplicates that are not in use (screenshot omitted).

    Three questions:

    1. I understand the advantages of having DNS in the ForestDnsZones partition, so: why is DNS using the DomainDnsZones copy, and is that acceptable considering _msdcs is in ForestDnsZones?
    2. If so, should I just delete DC=1.168.192.in-addr.arpa and DC=supernova.local from the ForestDnsZones partition? Or should I try to make those the copies in use? What are those steps? I understand how to delete — that is simple — but if I must move zones, some info would be appreciated there.
    3. Just to confirm my understanding: I can delete the two duplicates in the ForestDnsZones partition and leave _msdcs.supernova.local, as that's required there. This will resolve the errors I see.

    Just FYI: when I look in those folders in the ForestDnsZones partition, they have just 2 and 1 entries respectively, so they are obviously not in use compared to the others. I am pretty sure I understand the steps to complete this, but if you would like to provide that info, bonus points!

        Event Type: Warning
        Event Source: DNS
        Event Category: None
        Event ID: 4515
        Date: 1/4/2011
        Time: 2:14:18 PM
        User: N/A
        Computer: STANLEY
        Description:
        The zone 1.168.192.in-addr.arpa was previously loaded from the directory
        partition DomainDnsZones.supernova.local but another copy of the zone has
        been found in directory partition ForestDnsZones.supernova.local. The DNS
        Server will ignore this new copy of the zone. Please resolve this conflict
        as soon as possible.
        If an administrator has moved this zone from one directory partition to
        another this may be a harmless transient condition. In this case, no action
        is necessary. The deletion of the original copy of the zone should soon
        replicate to this server.
        If there are two copies of this zone in two different directory partitions
        but this is not a transient caused by a zone move operation then one of
        these copies should be deleted as soon as possible to resolve this conflict.
        To change the replication scope of an application directory partition
        containing DNS zones and for more details on storing DNS zones in the
        application directory partitions, please see Help and Support.
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: 89 25 00 00

    AND

        Event Type: Warning
        Event Source: DNS
        Event Category: None
        Event ID: 4515
        Date: 1/4/2011
        Time: 2:14:18 PM
        User: N/A
        Computer: STANLEY
        Description:
        The zone supernova.local was previously loaded from the directory partition
        DomainDnsZones.supernova.local but another copy of the zone has been found
        in directory partition ForestDnsZones.supernova.local. The DNS Server will
        ignore this new copy of the zone. Please resolve this conflict as soon as
        possible.
        (Remainder of the event text is identical to the event above.)
        Data: 0000: 89 25 00 00
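
    Before deleting anything, it can help to confirm which partition each zone is actually loaded from; a hedged sketch using dnscmd on a domain controller (zone names taken from the events above):

        :: Show each zone and the directory partition it is loaded from
        dnscmd /EnumZones

        :: Inspect an individual zone's storage details
        dnscmd /ZoneInfo supernova.local
        dnscmd /ZoneInfo 1.168.192.in-addr.arpa

        :: The ignored ForestDnsZones copies live under
        :: DC=ForestDnsZones,DC=supernova,DC=local and, if they really are stale,
        :: are typically removed with ADSI Edit rather than dnscmd.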


  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is, I have to use a different hardware configuration for each OS to achieve this.

    Hardware config #1 — dual monitors work for Windows XP Pro like this:

    - First display -> external VGA port
    - Second display -> DVI input on gfx card

    Hardware config #2 — dual monitors work for Ubuntu 10.04.1 like this:

    - First display -> VGA port on gfx card
    - Second display -> DVI input on gfx card

    I connected up the displays according to config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the gfx card and plugged it into the external VGA port (config #1) — Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using config #1, or am I missing something?

    I tried setting up dual monitors using config #1 this morning, which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:

        $ sudo X :2 -configure

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
        Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
        Build Date: 21 July 2010 12:47:34PM
        xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.16.4
        Before reporting problems, check http://wiki.x.org to make sure that
        you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
        List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati
        nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv
        vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa
        (++) Using config file: "/home/michael/xorg.conf.new"
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.
        Xorg detected your mouse at device /dev/input/mice.
        Please check your config if the mouse is still not operational,
        as by default Xorg tries to autodetect the protocol.
        Xorg has configured a multihead system, please check your config.
        Your xorg.conf file is /home/michael/xorg.conf.new
        To test the server, run 'X -config /home/michael/xorg.conf.new'
        ddxSigGiveUp: Closing log

        $ sudo X -config /home/michael/xorg.conf.new

        Fatal server error:
        Server is already active for display 0
        If this server is no longer running, remove /tmp/.X0-lock
        and start again.
        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        ddxSigGiveUp: Closing log

    I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so
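
    For what it's worth, the "Server is already active for display 0" error just means the desktop's X server was still running when the test was attempted. A hedged sketch of testing the generated config cleanly, assuming GDM is the display manager on this Ubuntu 10.04 box:

        # Stop the running X server first (Ubuntu 10.04 uses GDM by default)
        sudo service gdm stop

        # Test the generated configuration on a fresh display
        sudo X -config /home/michael/xorg.conf.new

        # If the layout looks right, install it where the server expects it
        sudo cp /home/michael/xorg.conf.new /etc/X11/xorg.conf
        sudo service gdm start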


  • Why does nginx rewrite a POST request from /login to //login?

    - by jiangchengwu
    There is an if statement which rewrites the URL when the client is Android. Everything is OK. But something strange happens: nginx rewrites a POST request for /login to //login, even if the body of the if statement is blank. So I get a 404 page, as the Jetty server only accepts /login requests.

    Server conf:

        location / {
            proxy_pass http://localhost:8785/;
            proxy_set_header Host $http_host;
            proxy_set_header Remote-Addr $http_remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            if ( $http_user_agent ~ Android ){
                # rewrite something, been commented
            }
        }

    Debug info (original log: https://gist.github.com/3799021):

        ...
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script regex: "Android"
        2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
        ...
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
        "POST //login HTTP/1.0
        Host: ireedr.com
        X-Real-IP: 106.187.97.22
        Connection: close
        Accept-Encoding: identity, deflate, compress, gzip
        Accept: */*
        User-Agent: Android/1.0
        "
        ...
        2012/09/28 16:29:49 [debug] 26416#0: *1 HTTP/1.1 404 Not Found
        Server: nginx/1.2.1
        Date: Fri, 28 Sep 2012 08:29:49 GMT
        Content-Type: text/html;charset=ISO-8859-1
        Transfer-Encoding: chunked
        Connection: keep-alive
        Cache-Control: must-revalidate,no-cache,no-store
        Content-Encoding: gzip
        ...

    Only when I comment out the block in the configuration file:

        location / {
            proxy_pass http://localhost:8785/;
            proxy_set_header Host $http_host;
            proxy_set_header Remote-Addr $http_remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            #if ( $http_user_agent ~ Android ){
            #
            #}
        }

    does the client get a 200 response. Debug info (original log: https://gist.github.com/3799023):

        ...
        "POST /login HTTP/1.0
        Host: ireedr.com
        X-Real-IP: 106.187.97.22
        Connection: close
        Accept-Encoding: identity, deflate, compress, gzip
        Accept: */*
        User-Agent: Android/1.0
        "
        ...
        2012/09/28 16:27:19 [debug] 26319#0: *1 HTTP/1.1 200 OK
        Server: nginx/1.2.1
        Date: Fri, 28 Sep 2012 08:27:19 GMT
        Content-Type: application/json;charset=UTF-8
        Content-Length: 17
        Connection: keep-alive
        ...

    As the log shows:

        2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script if
        2012/09/28 16:29:49 [debug] 26416#0: *1 post rewrite phase: 4
        2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 5
        2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 6
        2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 7
        2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 8
        2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 9
        2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 10
        2012/09/28 16:29:49 [debug] 26416#0: *1 post access phase: 11
        2012/09/28 16:29:49 [debug] 26416#0: *1 try files phase: 12
        2012/09/28 16:29:49 [debug] 26416#0: *1 posix_memalign: 0000000001E798F0:4096 @16
        2012/09/28 16:29:49 [debug] 26416#0: *1 http init upstream, client timer: 0
        2012/09/28 16:29:49 [debug] 26416#0: *1 epoll add event: fd:13 op:3 ev:80000005
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Host: "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "ireedr.com"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "X-Real-IP: "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "106.187.97.22"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Connection: close "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept-Encoding: identity, deflate, compress, gzip"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept: */*"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "User-Agent: Android/1.0"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
        "POST //login HTTP/1.0
        Host: ireedr.com
        X-Real-IP: 106.187.97.22
        Connection: close
        Accept-Encoding: identity, deflate, compress, gzip
        Accept: */*
        User-Agent: Android/1.0
        "
        ...

    Maybe the post rewrite phase has rewritten the request. Can anybody help me solve this problem, or explain why nginx does this? Much appreciated.
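
    One hedged explanation worth testing: when proxy_pass carries a URI part (the trailing "/" after the port) and an if block causes the location to be re-evaluated, nginx joins that URI part with the request URI, which produces //login. Passing the upstream address without a URI part forwards the client URI unchanged and usually avoids the doubled slash:

        location / {
            # No trailing slash after the port: /login is forwarded as /login
            proxy_pass http://localhost:8785;
            proxy_set_header Host $http_host;
            proxy_set_header Remote-Addr $http_remote_addr;
            proxy_set_header X-Real-IP $remote_addr;

            if ( $http_user_agent ~ Android ){
                # Android-specific rewrites can go here
            }
        }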


  • How to resolve high CPU + excessive stat("/etc/localtime") and clock_gettime(CLOCK_REALTIME) calls

    - by Yemster
    I've been experiencing really high CPU on a Ruby on Rails app (see stack below) and have been trying to diagnose the possible causes, to no avail.

    Stack:

    - ruby 1.9.3
    - rails 3.2.6
    - Apache/2.2.21 (Debian)
    - Phusion Passenger 3.0.11

    Whenever I run strace against the spiking Rack process PID (see the top excerpt below), I see a tonne of stat("/etc/localtime") and clock_gettime(CLOCK_REALTIME) calls and have no idea how to stop them.

    Excerpt from top showing the running PID:

        PID   USER     PR NI VIRT RES  SHR  S %CPU %MEM TIME+     COMMAND
        11674 www-user 20 0  313m 182m 5076 R 99   2.3  63:04.60  Rack: /var/www/my_rails_app/current
        11634 www-user 20 0  411m 216m 5144 S 10   2.7  197:55.63 Rack: /var/www/my_rails_app/current

    strace snippet below:

        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 141474018}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 141577456}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 143073982}) = 0
        [pid 11674] poll([{fd=15, events=POLLIN|POLLPRI}], 1, 0) = 0 (Timeout)
        [pid 11674] write(15, "b\0\0\0\3SELECT `images`.* FROM `ima"..., 102) = 102
        [pid 11674] read(15, "\1\0\0\1\0229\0\0\2\3def\23myappy_productio"..., 16384) = 2063
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 144138035}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        ...
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=118, ...}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 154076443}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 154189429}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 157185700}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 157298770}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 165076003}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 165212572}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 167542679}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354058955, 167683436}) = 0
        ...
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 62052248}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 62182486}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 62919948}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 63057266}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 63751707}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 73730686}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 75874687}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 76077133}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 78205019}) = 0
        ...
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 89370879}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 89583247}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 91637614}) = 0
        [pid 11674] clock_gettime(CLOCK_REALTIME, {1354060036, 91782149}) = 0

    I have Googled around and come across a number of suggestions, which I've tried with no success. Things tried so far:

    - Setting the time zone as recommended (link in the original post). Made no difference and the issue still persists. Content of my /etc/localtime:

          TZif2UTCTZif2UTC UTC0

    - The recommended fix for the leap-second bug: date -s "`date`". No joy so far.

    I'm fresh out of ideas, so any help or advice on how to diagnose or resolve this would be greatly appreciated.
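
    A hedged mitigation that is commonly suggested for exactly this strace signature: when the TZ environment variable is unset, glibc stats /etc/localtime on every local-time lookup, so exporting TZ for the Apache/Passenger processes often eliminates the stat storm. A minimal sketch, assuming Apache reads /etc/apache2/envvars on this Debian box:

        # /etc/apache2/envvars
        # The leading colon tells glibc to read this zoneinfo file once
        # instead of stat()ing /etc/localtime on every time lookup.
        export TZ=:/etc/localtime

        # Then restart Apache so the Passenger workers inherit it:
        sudo /etc/init.d/apache2 restart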


  • What would cause an .exe to vanish without a trace?

    - by Peter pete
    I have a few computers. One computer, at home, one day suddenly had its pgbouncer.exe vanish. The antivirus [Avast] didn't have it in its virus chest. I couldn't find pgbouncer anywhere. All the other pgbouncer files remained where they used to be; only the exe had vanished. I hadn't uninstalled it, nor had anyone else used the machine. I hadn't installed any new software since the previous time I had used it, either.

    Just now, however, my TV computer was running out of disk space, which was weird because I had Python set up to do transcoding and archiving. I logged in and voila! python.exe had vanished!! Once again, Avast didn't have it in its virus chest, and dunno! This time, with the TV computer, I do know exactly the date that Python vanished: on Sat the 18th my Python scripts ran fine; on Sat the 19th Python was gone. I'm going to do some hunting in the event log to see what happened that day. But if anyone has experienced vanishings before and has a clue what happened, I would like to know.

    FYI: Both computers (the pgbouncer vanishing and the python vanishing) were running Win7 with the RDP hack, both on SSDs, and both with Avast. Both computers had all Windows updates set to manual (to prevent random stuff changing!) and neither had recently had any Windows updates manually applied.

    FYI2: Since the beginning of October, the TV computer had Dropbox running all the time, trying to download two files. Sadly, a temporary download of each of these two files by Dropbox resulted in Avast freaking out and virus-chesting them, then Dropbox partially downloading them again, before they were chested again. Now, these two files were binaries from a program I had personally written, and they were clean on other machines. Since the Python vanishing I have deleted these two binaries from Dropbox (using the website) and the Dropbox exe on the TV computer is now at peace. I don't think this should cause python.exe to vanish, though :/

    New edit: On 18/10/2013 at 07:42 my Python script ended with an error, "file still in use", which was unexpected, but I shrugged it off since sometimes MediaPortal doesn't release the recording. On second thought that is weird, since the show in use would have finished recording the day before. On 18/10/2013 at 08:07 the Windows event log complains that several drivers required for CutePDF, Send to OneNote 2010, Send to OneNote 2013, and Microsoft XPS Document Writer aren't installed. I just checked now, and indeed those printers have vanished!

    New update! I found my python.exe that had been removed. It was still in the C:\Python33\ directory, except it had been renamed to a random-string .tmp file (i.e., it was made into a temp file) with a creation date of 19/10/2013 at 06:00:02 am. Now, the computer normally wakes at 6am to do transcoding. What could have moved my Python file into a tmp file?
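
    Since the rename happened at a known time (06:00:02 on 19/10/2013), dumping the System log around that minute narrows the suspects. A hedged one-liner, with the time window as the only assumption (SystemTime is UTC, so shift for your local offset):

        :: Dump System-log events in a window around the vanishing
        wevtutil qe System /q:"*[System[TimeCreated[@SystemTime>='2013-10-19T05:55:00.000Z' and @SystemTime<='2013-10-19T06:05:00.000Z']]]" /f:text /c:50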


  • Email sent from CentOS ends up in the user's spam folder

    - by oObe
    I am facing this issue: I use the default Postfix MTA in CentOS, but the mail ends up in the user's spam folder. This does not seem to be a problem in Debian using Exim4. Both hosts have a hostname and domain name configured, and both relay mail through an external SMTP host. Both configurations and the received email headers are attached. The difference seems to be that Debian has an additional envelope-from tag and a fuller From: header, besides some minor syntax differences. Any help resolving this is appreciated. The IP addresses and DNS names are masked as follows:

    - 1.2.3.4 = my IP address
    - smtp.host.com = external SMTP host for my company
    - [email protected] = account at the SMTP host
    - centos.abc.com = local CentOS server
    - debian.abc.com = local Debian server

    Thanks.

    CentOS main.cf config, with the following params configured:

        myhostname = centos.abc.com
        mydomain = abc.com
        myorigin = centos.abc.com
        relayhost = smtp.host.com

    CentOS — user's received mail header:

        Return-Path: <[email protected]>
        Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP;
            Thu, 27 Sep 2012 13:36:49 +0800
        Received: by centos.abc.com (Postfix, from userid 0)
            id 1E0637B89; Fri, 28 Sep 2012 13:36:39 +0800 (SGT)
        Date: Fri, 28 Sep 2012 13:36:39 +0800
        To: [email protected]
        Subject: Test mail from centos
        User-Agent: Heirloom mailx 12.4 7/29/08
        MIME-Version: 1.0
        Content-Type: text/plain; charset=us-ascii
        Content-Transfer-Encoding: 7bit
        Message-Id: <[email protected]>
        From: [email protected] (root)
        X-SmarterMail-TotalSpamWeight: 0
        X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message
        X-Antivirus-Status: Clean

    Screenshot: http://i.imgur.com/7WAYX.jpg

    Debian exim4 config:

        # This is a Debian specific file
        dc_eximconfig_configtype='smarthost'
        dc_other_hostnames='debian.abc.com'
        dc_local_interfaces='127.0.0.1 ; ::1'
        dc_readhost='debian.abc.com'
        dc_relay_domains='smtp.host.com'
        dc_minimaldns='false'
        dc_relay_nets='127.0.0.1'
        dc_smarthost='smtp.host.com'
        CFILEMODE='644'
        dc_use_split_config='false'
        dc_hide_mailname='true'
        dc_mailname_in_oh='true'
        dc_localdelivery='mail_spool'

    Debian — user's received mail header:

        Return-Path: <[email protected]>
        Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP;
            Thu, 27 Sep 2012 15:02:53 +0800
        Received: from root by debian.abc.com with local (Exim 4.72)
            (envelope-from <[email protected]>)
            id 1TH86d-00010v-G9
            for [email protected]; Thu, 27 Sep 2012 15:01:55 +0800
        Date: Thu, 27 Sep 2012 15:01:55 +0800
        Message-Id: <[email protected]>
        To: [email protected]
        Subject: Test from debian
        From: root <[email protected]>
        X-SmarterMail-TotalSpamWeight: 0
        X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message
        X-Antivirus-Status: Clean

    Screenshot: http://imgur.com/nMsMA.jpg
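
    One hedged experiment worth trying: the CentOS message is submitted directly by root ("Postfix, from userid 0") with a bare From, while the Debian one carries a full envelope-from, and content filters often score on exactly that difference. Two quick tests (the addresses below are examples):

        # Send a test with an explicit From using Heirloom mailx's -r flag
        echo "Test body" | mail -s "Test mail from centos" -r root@centos.abc.com user@example.com

        # Or rewrite outbound local senders in Postfix. In main.cf:
        #   smtp_generic_maps = hash:/etc/postfix/generic
        # In /etc/postfix/generic:
        #   root@centos.abc.com    noreply@abc.com
        # Then rebuild the map and reload:
        postmap /etc/postfix/generic && postfix reload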


  • What's wrong with this VCL config for Varnish Cache as a load balancer?

    - by dabito
    I have the following configuration active in the default.vcl Varnish file on the machine that balances the load for two other machines (the other two machines also run Varnish). My intention is to have this server do only the load balancing, and the other machines do the processing and also their own caching. My problem is that even with light config testing (not even a stress test, just a few requests a minute) I get the guru meditation error and have to restart Varnish. This is the default.vcl for the load-balancing server:

        backend vader {
            .host = "app1.server.com";
            .probe = {
                .url = "/";
                .interval = 10s;
                .timeout = 4s;
                .window = 5;
                .threshold = 3;
            }
        }

        backend malgus {
            .host = "app2.server.com";
            .probe = {
                .url = "/";
                .interval = 10s;
                .timeout = 4s;
                .window = 5;
                .threshold = 3;
            }
        }

        director dooku round-robin {
            { .backend = vader; }
            { .backend = malgus; }
        }

        sub vcl_recv {
            if (req.http.host ~ "^balancer.server.com$") {
                set req.backend = dooku;
            }
        }

    Am I doing something wrong or missing something?

    EDIT: This is varnishlog's output:

        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345839995 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345839998 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345840001 1.0
        0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876
        0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345840004 1.0
        14 SessionOpen c 10.150.7.151 38272 :80
        14 ReqStart c 10.150.7.151 38272 458200540
        14 RxRequest c GET
        14 RxURL c /
        14 RxProtocol c HTTP/1.1
        14 RxHeader c Host: dooku-dev.excelsior.com
        14 RxHeader c Connection: keep-alive
        14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        14 RxHeader c Accept-Encoding: gzip,deflate,sdch
        14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4
        14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33
        14 VCL_call c recv pass
        14 VCL_call c hash
        14 Hash c /
        14 Hash c dooku-dev.excelsior.com
        14 VCL_return c hash
        14 VCL_call c pass pass
        14 FetchError c no backend connection
        14 VCL_call c error deliver
        14 VCL_call c deliver deliver
        14 TxProtocol c HTTP/1.1
        14 TxStatus c 503
        14 TxResponse c Service Unavailable
        14 TxHeader c Server: Varnish
        14 TxHeader c Content-Type: text/html; charset=utf-8
        14 TxHeader c Retry-After: 5
        14 TxHeader c Content-Length: 418
        14 TxHeader c Accept-Ranges: bytes
        14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT
        14 TxHeader c X-Varnish: 458200540
        14 TxHeader c Age: 0
        14 TxHeader c Via: 1.1 varnish
        14 TxHeader c Connection: close
        14 Length c 418
        14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663
        14 SessionClose c error
        14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418
        14 SessionOpen c 10.150.7.151 38273 :80
        14 ReqStart c 10.150.7.151 38273 458200541
        14 RxRequest c GET
        14 RxURL c /favicon.ico
        14 RxProtocol c HTTP/1.1
        14 RxHeader c Host: dooku-dev.excelsior.com
        14 RxHeader c Connection: keep-alive
        14 RxHeader c Accept: */*
        14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        14 RxHeader c Accept-Encoding: gzip,deflate,sdch
        14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4
        14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33
        14 VCL_call c recv pass
        14 VCL_call c hash
        14 Hash c /favicon.ico
        14 Hash c dooku-dev.excelsior.com
        14 VCL_return c hash
        14 VCL_call c pass pass
        14 FetchError c no backend connection
        14 VCL_call c error deliver
        14 VCL_call c deliver deliver
        14 TxProtocol c HTTP/1.1
        14 TxStatus c 503
        14 TxResponse c Service Unavailable
        14 TxHeader c Server: Varnish
        14 TxHeader c Content-Type: text/html; charset=utf-8
        14 TxHeader c Retry-After: 5
        14 TxHeader c Content-Length: 418
        14 TxHeader c Accept-Ranges: bytes
        14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT
        14 TxHeader c X-Varnish: 458200541
        14 TxHeader c Age: 0
        14 TxHeader c Via: 1.1 varnish
        14 TxHeader c Connection: close
        14 Length c 418
        14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796
        14 SessionClose c error
        14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418
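
    For what it's worth, the log excerpt already shows the immediate cause of the 503s: both backends are "Still sick" according to their health probes, so the director has nothing healthy to hand requests to ("FetchError c no backend connection"). A hedged probe variant to experiment with while diagnosing, plus the command for checking probe status (syntax as in Varnish 3):

        backend vader {
            .host = "app1.server.com";
            .probe = {
                .url = "/";
                .expected_response = 200;  # make the success criterion explicit:
                                           # the app servers must answer 200 on /
                .interval = 10s;
                .timeout = 10s;            # was 4s; rules out slow first responses
                .window = 5;
                .threshold = 3;
            }
        }

        # Check probe results from the management interface
        varnishadm debug.health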


  • USB connection is unstable with Nexus S 2.3.4 on AMD 64 running 64-bit Windows 7, but works with 32-bit Windows Vista

    - by Mike
    The USB connection is unstable with the Nexus S (Android 2.3.4) on AMD64 running 64-bit Windows 7, but it works with 32-bit Windows Vista.

    Problem description: On the 64-bit Windows 7 machine my Nexus S appears to connect, but then it disconnects moments later. Neither accessing USB storage nor loading an Android application package file (APK) using the Android Debug Bridge (ADB) works. On 32-bit Windows Vista, using the same USB cable, USB storage works. I haven't tried the ADB on 32-bit Windows Vista.

    Reproduction steps for USB storage (I have provided the reproduction steps for USB storage and not ADB, because if one isn't working then the other isn't working either, and the USB storage steps are shorter to document):

    1. Connect the USB cable to the Nexus S and my Windows 7 machine.
       Effect: The "USB Mass Storage, USB Connected" dialog appears with the button "Turn on USB storage."
    2. Click "Turn on USB Storage".
       Effect: The "working circle" appears. A dialog briefly appears saying "USB storage in use," then it either returns me to step 1 (now that I am running 2.3.4) or is replaced with the Nexus S's application homepage (while I was running 2.3.3). I'm not sure if the version matters, but I mention it for completeness.

    On the 32-bit Windows Vista machine the connection is stable. I am able to navigate through the Nexus S file system; create, read, update, and delete files; etc. I haven't tried connecting with the ADB.

    Troubleshooting summary — tried and failed:

    - Uninstalling and reinstalling the Android USB drivers, including removing the files
    - Uninstalling my custom software
    - Pulling the Nexus S's battery
    - Restarting the Nexus S
    - Restarting 64-bit Windows 7
    - Changing USB ports on the 64-bit Windows 7 box
    - Comparing the dates and file sizes of the DLLs in my google-usb_driver\amd64 directory and the windows\System32 directory: they match. (The sizes for the google-usb_driver\i386 directory do not match, as expected.)
    - Turning off debugging mode on the Nexus S: does not resolve the problem
    - Searching Google

    Tried and succeeded:

    - Connecting to another machine (Windows Vista) using the same USB cable and Nexus S phone

    Troubleshooting observations: I notice that uninstalling the device drivers and deleting the files, reinstalling the drivers, rebooting 64-bit Windows 7, unplugging the Nexus S, and plugging it back in occasionally helps for a short amount of time (minutes to hours, not days). When it is working, I can both access the Nexus S's drive and load/test applications using the ADB. I have also observed some wonky behavior in Device Manager that I haven't tracked down: sometimes the black Nexus S image appears in the list of devices; sometimes the image displays as a computer with a green ISA card; and sometimes it appears neither at the top level of devices nor under "Other devices," but does appear under "Disk drives" as "Android UMS Composite USB Device."

    System configuration: The Nexus S is running Android OS 2.3.4; "Settings\about phone\System updates" indicates that it is up to date as of May 21st, 2011. Both 32-bit Windows Vista and 64-bit Windows 7 are up to date. The Windows Vista system is running on an Intel 32-bit processor; Windows 7 is running on an AMD 64-bit processor. I have done Android development on both systems, but I usually develop on the 64-bit Windows 7 machine.


  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
    I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting, so all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable).

    I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and JavaScript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem, but I'm mentioning it here for completeness. My httpd.conf looks roughly like this:

        <VirtualHost *:80>
            ServerName external.example.com
            ServerAlias example.com

            # Serve all local content directly, reverse-proxy all unknown URIs.
            RewriteEngine On
            RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
            RewriteRule ^.*$ - [L]
            RewriteRule ^/~ - [L]
            RewriteRule ^(.*)$ http://internal.example.com$1 [P]

            # Standard header rewriting.
            ProxyPassReverse / http://internal.example.com/foo/
            ProxyPassReverseCookieDomain internal.example.com external.example.com
            ProxyPassReverseCookiePath /foo/ /

            # Strip any Accept-Encoding: headers from the client so we can process
            # the pages as plain text.
            RequestHeader unset Accept-Encoding

            # Use mod_proxy_html to fix URLs in text/html content.
            ProxyHTMLEnable On
            ProxyHTMLURLMap http://internal.example.com/foo/ /
            ProxyHTMLURLMap http://internal.example.com/foo /
            ProxyHTMLURLMap /foo/ /

            ## Use mod_substitute to fix URLs in CSS and Javascript
            #<Location />
            #    AddOutputFilterByType SUBSTITUTE text/css
            #    AddOutputFilterByType SUBSTITUTE text/javascript
            #    Substitute "s|http://internal.example.com/foo/|/|nq"
            #</Location>

            # Use mod_ext_filter to fix URLs in CSS and Javascript
            ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls"
            ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls"
            <Location />
                SetOutputFilter fixurlcss;fixurljs
            </Location>
        </VirtualHost>

    The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session:

        $ wget -O /dev/null -S http://external.example.com/include/jquery.js
        --2013-11-01 11:36:36--  http://external.example.com/include/jquery.js
        Resolving external.example.com (external.example.com)... 192.168.0.1
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 200 OK
          Date: Fri, 01 Nov 2013 15:36:36 GMT
          Server: Apache
          Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT
          ETag: "1d60026-187b8-4e9e0ec273e35"
          Accept-Ranges: bytes
          Vary: Accept-Encoding
          X-UA-Compatible: IE=edge,chrome=1
          Content-Type: text/javascript;charset=utf-8
          Connection: close
          Transfer-Encoding: chunked
        Length: unspecified [text/javascript]
        Saving to: `/dev/null'

            [ <=> ] 100,280     --.-K/s   in 0.005s

        2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success). Retrying.

        --2013-11-01 11:36:38--  (try: 2)  http://external.example.com/include/jquery.js
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 416 Requested Range Not Satisfiable
          Date: Fri, 01 Nov 2013 15:36:38 GMT
          Server: Apache
          Vary: Accept-Encoding
          Content-Type: text/html;charset=utf-8
          Content-Length: 260
          Connection: close
        The file is already fully retrieved; nothing to do.

    Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs; upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)
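
    For reference, the ExtFilterDefine lines above run sed with an external script file. A hedged guess at what /etc/httpd/fixurls contains, inferred from the ProxyHTMLURLMap rules used for the HTML content:

        # /etc/httpd/fixurls -- rewrite internal URLs in CSS/JS output (sed -r syntax)
        s|http://internal\.example\.com/foo/|/|g
        s|http://internal\.example\.com/foo|/|g
        s|/foo/|/|g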


  • SharePoint - Summing Calculated Columns By Groups (DVWP)

    - by Mark Rackley
    I had a problem… okay.. okay.. so I have many problems… but let's focus on one in particular or this blog post would never end… okay? Thank you….

    So, I had an electronic timesheet where users entered hours for each day of the week. It also had a "Week Total" column, which was a calculated column of the sum (screenshot omitted). Pretty easy.. nothing spectacular. So, what's the problem? WELL……………….. There is a row in the timesheet for each task a person worked on in a given week. So, if you worked on 4 tasks, you would have 4 rows of data, and 4 week totals for that week. This is all fine and dandy, but I want to know what the total was for the entire week. Yes.. I realize the answer is 24 from my example… I mean, I know how to add! I just want SharePoint to display it for me for the executives (we all know, they have math problems).

    You may be thinking, hey genius (in a sarcastic tone of course), why don't you just go to the view and total on the "Week Total" field. What a brilliant idea! Why didn't I think of that… let's go to the view and do just that…. Ohhhhhh… you can't total on a Calculated Column.. it's not even an option… Yeah… I had the same moment.

    So, what do you do? Well… what do you think I did? 1) Googled "SharePoint total calculated column" 2) Said it couldn't be done 3) Took a nap 4) Asked the question on Twitter? The correct answer of course is number 4… followed by number 3… although I may have told my boss number 2 so that I look more brilliant than I am? It's safe to say I did NOT try to find the solution on my own doing step 1… that would be just WAY too easy…

    So, anyway, I posted the question on Twitter, and it turns out several people had suggestions, from using jQuery to using DVWPs. I tend to be a big fan of the DVWP, except for the disgusting process of deploying them to another farm.. ugh… just shoot me…. so, that is the solution I went with. Laura Rogers (@WonderLaura) has a super duper easy to follow video on the subject over at EndUserSharePoint.com: SharePoint: Displaying Calculated Column SUMS in a View (Screencast).

    Laura's video was very easy to follow and was ALMOST exactly what I needed. She does a great job walking you through every step of summing up a calculated field, which was PART of my problem. The other part was that my list is grouped by date! So, I wanted to see, for a given week, the summed "Week Total" of hours. Laura got me on the right track with her video, and I dug a little deeper into the DVWP to accomplish my task. So, here are the steps you follow:

    1. Click on the "chevron" (I didn't know it was actually called that until I heard Laura say it).. I always call it the "little-button-in-the-top-right-corner-with-the-greater-than-sign".. but "chevron" is much shorter. So, click on the chevron, then click on "Sort and Group".
    2. Add the field you want to group by; in my example it is the "Monday Date" of the timesheet entry.
    3. Make sure to check the check boxes for "Show Group Header" AND "Show Group Footer". Click "OK".

    The view now shows the count of each grouped set of data. Interesting, this looks very similar to Laura's video… right? So, let's take a look at the code for the count:

        Count : <xsl:value-of select="count($nodeset)" />

    Wow, also very similar… except in Laura's video it looks like:

        Count : <xsl:value-of select="count($Rows)" />

    So.. the only difference is that instead of $Rows we have $nodeset. It turns out $nodeset will go through each row in the group, just like $Rows goes through each row in the entire view. So, using the exact same logic as in Laura's blog, except replacing $Rows with $nodeset, we get the ability to sum up the values for a group. I want to replace "Count: #" with the total hours, which is done using the following change to the above code:

        Week Total : <xsl:value-of select="sum($nodeset/@Monday)+sum($nodeset/@Tuesday)
            +sum($nodeset/@Wednesday)+sum($nodeset/@Thursday)+sum($nodeset/@Friday)
            +sum($nodeset/@Saturday)+sum($nodeset/@Sunday)" />

    Our final output has the summed hours for each group! So… long story short… follow Laura's blog, then group your list, then replace "$Rows" with "$nodeset". One caveat: this will not work if you group by a person field. For some reason the person field does not go through each row in the group. I haven't dug into this much yet. Maybe if I find some time… whatever that is… Anyway, Laura did all the work, I just took it one small step forward… as always, feel free to leave any additional insights you may have. We're all learning here!


  • Tools to Help Post Content On Your WordPress Blog

    - by Matthew Guay
    Now that you've got a nice blog, you want to do more with it and start posting content. Here we look at some tools that will allow you to post directly to your WordPress blog.

    Writing a new blog post is easy with WordPress, as we saw in our previous post about starting your own WordPress blog. The web editor gives you a lot of features and even lets you edit your post's source code if you enjoy hacking HTML. There are other tools that will allow you to post content; here we look at how you can post with dedicated apps, browser plugins, and even by email.

    Windows Live Writer

    Windows Live Writer (part of the Windows Live Essentials suite) is a great app for posting content to your blog. This free program from Microsoft lets you post content to a variety of blogging services, including Blogger, TypePad, LiveJournal, and of course WordPress. You can write blog posts directly from its Word-like editor, complete with pictures and advanced formatting. Even if you're offline, you can still write posts and save them for when you're online again. For more information about installing Live Writer, check out our article on how to install Windows Live Essentials in Windows 7.

    Once Live Writer is installed, open it to add your blog. If you already had Live Writer installed and configured for a blog, you can add your new blog, too: just click your blog's name in the top right corner, and select "Add blog account". Select "Other blog service" to add your WordPress blog to Writer, and click Next. Enter your blog's web address, and your username and password. Check "Remember my password" so you don't have to enter it every time you write something. Writer will analyze your blog and set up your account. During the setup process it may ask to publish a temporary post; this will let you preview blog posts using your blog's real theme, which is helpful, so click Yes. Finally, add your blog's name, and click Finish.

    You can now use the rich editor to write and add content to a new blog post. Select the Preview tab to see how your post will look on your blog, or, if you're an HTML geek, select the Source tab to edit the code of your blog post. From the bottom of the window, you can choose categories, insert tags, and even schedule the post to publish on a different day. Live Writer is fully integrated with WordPress; you're not missing anything by using the desktop editor. If you want to edit a post you've already published, click the Open button and select the post. You can choose and edit any post, including ones you published via the web interface or other editors.

    Add Multimedia Content to your Posts with Live Writer

    Back in the Edit tab, you can add pictures, videos, and more from the sidebar. Select what you want to insert.

    Pictures: If you insert a picture, you can add many nice borders and designs to it, or you can even add artistic effects from the Effects tab in the sidebar.

    Photo Gallery: If you want to post several pictures, say some of your vacation shots, then inserting a photo gallery may be the best option. Select Insert Photo Gallery in the sidebar, and then choose the pictures you want in the gallery. Once the gallery is inserted, you can choose from several styles to showcase your pictures. When you post the blog entry, you will be asked to sign in with your Windows Live ID, as the gallery pictures will be stored in the free SkyDrive storage service. Your blog readers can see a preview of your pictures directly on your blog, and can then view each individual picture, download them, or see a slideshow online via the link.

    Video: If you want to add a video to your blog post, select Video from the sidebar as above. You can select a video that's already online, or you can choose a new video from file and upload it via YouTube directly from Windows Live Writer. Note that you will have to sign in with your YouTube account to upload videos to YouTube, so if you're not logged in you'll be prompted to do so when you click Insert.

    Geek tip: If you ever want to copy your Live Writer settings to another computer, check out our article on how to back up your Windows Live Writer settings.

    Microsoft Office Word

    Word 2007 and 2010 also let you post content directly to your blog. This is especially nice if you've already typed up a document and think it would be good on your blog as well. Check out our in-depth tutorial on posting blog posts via Word 2007, using Word 2007 as a blogging tool. This works in Word 2010 too, except the Office Orb has been replaced by the new Backstage view. So, in Word 2010, to start a new blog post, click File \ New, then select Blog post, and proceed as you would in Word 2007 to add your blog settings and post the content you want. Or, if you've already written a document and want to post it, select File \ Share (or Save and Send in the final version of Word 2010), and then click Publish as Blog Post. If you haven't set up your blog account yet, set it up as shown in the Word 2007 article.

    Post Via Email

    Most of us use email daily and already have our favorite email app or service. Whether on your desktop or mobile phone, it's easy to create rich emails and add content. WordPress lets you generate a unique email address that you can use to easily post content to your blog: just compose your email with the subject as the title of your post, and send it to this unique address. Your new post will be up in minutes.

    To activate this feature, click the My Account button in the top menu bar in your WordPress.com account, and select My Blogs. Click the Enable button under Post by Email beside your blog's name. Now you'll have a private email address you can use to post to your blog. Anything you send to this address will be posted as a new post. If you think your email may be compromised, click Regenerate to get a new publishing email address.

    Any email program or webapp is now a blog post editor. Feel free to use rich formatting or insert pictures; it all comes through great. This is also a great way to post to your blog from your mobile device: whether you're using webmail or a dedicated email client on your phone, you can now blog from anywhere.

    Mobile Applications

    WordPress also offers dedicated applications for blogging directly from your mobile device. You can write new posts, edit existing ones, and manage comments, all from your smartphone. Currently they offer apps for iPhone, Android, and BlackBerry. Check them out at the link below.

    Conclusion

    Whether you want to write from your browser or email a post to your blog, WordPress is flexible enough to work right along with your preferences. However you post, you can be sure that it will look professional and be easily accessible on your WordPress blog.

    Download Windows Live Writer
    Download WordPress apps for your mobile device


  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
    OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it lacks a lot of concepts and information.

    It starts with a good description of NHibernate and why we would use it. It goes on to describe basic mapping scenarios with primary keys generated by the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart. When it comes to relations, it covers one-to-many/many-to-one and many-to-many, but not one-to-one relations, and it only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, not any of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code.

    Querying only covers loading by ID (using Get, not Load) and the Criteria API, for which a paging example is presented as well as some common filtering options (property equals/like/between; no examples on conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications — it basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate.

    The available entity generators for NHibernate are covered and evaluated in a chapter of their own. The list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net), and examples are provided whenever possible; however, I have some problems with some of the evaluations. For example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means to say that it can be launched from inside Visual Studio, but then, what can't?

    Finally, there's a chapter I really don't understand. It seems like a bag into which a lot of things were thrown, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion, and web.config settings related to the maximum request length for file uploads, ending with XML configuration with the help of GhostDoc.

    Like I said, the book is only good for absolute beginners. It does a fair job of explaining the very basics, but lacks a lot of not-so-basic concepts. Among other things, it lacks:

    - Inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class)
    - Load versus Get usage
    - Other useful ISession methods
    - First level cache (Identity Map pattern)
    - Collection types other than Bag (Set, List, Map, IdBag, etc.)
    - Fetch options
    - User Types
    - Filters
    - Named queries
    - LINQ examples
    - HQL examples

    And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring of the Oracle Utilities business application server component. This feature is primarily focused on performance tracking of the product. In the first release of Oracle Utilities Customer Care And Billing (V1.x, that is), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was its performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also to gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was lost as well. While you can get some information from access.log and some MBeans supplied by the Web Application Server, it was not at the same granularity as txrpt, nor as useful. I am happy to say we have not only reintroduced this facility in the Oracle Utilities Application Framework, but it is now accessible via JMX and we have added more detail to the performance tracking. Much of this new design came from working with customers around the world, to make sure we introduced a new feature that not only satisfied their performance tracking needs but also allowed for finer-grained performance analysis.
    As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in the RMI Port number for JMX Business setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. The credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:
    spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:
        spl.runtime.management.rmi.port=6750
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled
    ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX defaults. You can use additional security features by altering the spl.properties file manually or using a custom template; for more security options see JMX Security.
    Once this has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR 160-compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed beforehand to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry. You are then able to track performance of the product using the PerformanceStatistics MBean.
    The attributes of the PerformanceStatistics MBean are counts of each object type, and this is where this facility differs from txrpt. The information that is collected includes the following:
    - The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc.), so you can compare the performance of updates against read transactions.
    - The Minimum and Maximum times are collected to give you an idea of the spread of performance.
    - The last call is recorded. The date, time and user of the last call are captured to give you an idea of the timeliness of the data.
    - The MBean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site.
    There are a number of interesting operations that can also be performed:
    - reset - resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is OK for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish; the statistics collected represent values since the last restart or last reset.
    - completeExecutionDump - produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format:
        ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User
        ...
        CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN
        CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA
        CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS
        CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON
        CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN
        CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN
        InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN
        InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN
        MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN
        NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN
        ...
    There are other operations and attributes available; refer to the Server Administration Guide provided with your product for the full set. This is one of the many features I am proud we implemented, as it allows flexible monitoring of the performance of the product.
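
    The execution dump is plain CSV, so it can be post-processed with a few lines of code as well as in a spreadsheet. Below is a small illustrative C# sketch, not part of the product, that loads a dump saved to a file and lists the slowest services; the file name is made up, and it assumes a cleanly saved dump (header row plus data rows):

        using System;
        using System.Globalization;
        using System.IO;
        using System.Linq;

        class ExecutionDumpReader
        {
            static void Main()
            {
                // Column order follows the extract above:
                // ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, ...
                var slowest = File.ReadLines("executiondump.csv")
                    .Skip(1)                               // skip the header row
                    .Select(line => line.Split(','))
                    .Select(f => new
                    {
                        ServiceName = f[0].Trim(),
                        ServiceType = f[1].Trim(),
                        AvgTime = double.Parse(f[4], CultureInfo.InvariantCulture),
                        Calls = int.Parse(f[5].Trim())
                    })
                    .OrderByDescending(s => s.AvgTime)
                    .Take(10);

                foreach (var s in slowest)
                    Console.WriteLine("{0} ({1}): avg {2:F1} over {3} calls",
                        s.ServiceName, s.ServiceType, s.AvgTime, s.Calls);
            }
        }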

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It's no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residency in your data store? Some might argue that it's the responsibility of the person sending you the data. While that may be true, in practice it will rarely hold up: it doesn't matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example: invalid date values, inappropriate characters, wrong data, and values that exceed a pre-set threshold. While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.
    Tip 1: Build reusable data objects with embedded cleansing rules. One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor's concept of Semantic Types, you can define reusable data objects that have embedded logic, such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I've defined a constraint on zip code. I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type, for example. The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied.
    Tip 2: Unlock the power of regular expressions in Semantic Types. Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint. A regular expression is used to identify patterns within data. The patterns could be something as simple as a date format or something much more complex, such as a street address. For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, separated by periods. So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be. How can I account for this using regular expressions? A very simple regular expression that looks for four groups of one to three digits separated by periods would be: ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$. Alternatively, the following would be the exact check for truly valid IP addresses as we defined them above: ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$. In expressor, we would enter this regular expression as a constraint, selecting the corrective action to be 'Escalate', meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.
    Tip 3: Email pattern expressions that might come in handy. In the example schema that I am using, there's a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam. While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses. This one is short and sweet: \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/). This one is more specific about which characters are allowed: ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26).
    Tip 4: Reject "dirty data" for analysis or further processing. Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture rejected records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, configure the Write File operator in a similar way to the Read File operator; the key difference is that its schema needs to be derived from the upstream operator. Once configured, expressor will output rejected records to the file you specified. In addition to the rejected records, expressor also captures some diagnostic information that is helpful in identifying why each record was rejected. This makes diagnosing errors much easier!
    Tip 5: Use a Filter or Transform after the initial cleansing to finish the job. Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation. It also supports the use of conditional logic, and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach: expressor operators can be combined in many different ways to achieve the desired results. For example, in the expressor Dataflow below, I post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.
    I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here. Their Studio ETL tool is absolutely free, and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
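
    Returning to Tip 2 for a moment: the same patterns can be sanity-checked outside of expressor. Here is a small illustrative C# sketch (the class name and sample values are made up) that applies the strict IP-address expression quoted above:

        using System;
        using System.Text.RegularExpressions;

        class IpAddressCheck
        {
            // The exact-match IP pattern from Tip 2, split for readability.
            static readonly Regex IpRegex = new Regex(
                @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
                @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$");

            static void Main()
            {
                Console.WriteLine(IpRegex.IsMatch("192.168.23.123")); // True
                Console.WriteLine(IpRegex.IsMatch("888.777.0.123")); // False
            }
        }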

    Read the article

  • What’s New in Delphi XE6 Regular Expressions

    - by Jan Goyvaerts
    There's not much new in the regular expression support in Delphi XE6. The big change that should be made, upgrading to PCRE 8.30 or later and switching to the pcre16 functions that use UTF-16, still hasn't been made. XE6 still uses PCRE 7.9 and thus continues to require conversion from the UTF-16 strings that Delphi uses natively to the UTF-8 strings that older versions of PCRE require. Delphi XE6 does fix one important issue that has plagued TRegEx since it was introduced in Delphi XE. Previously, TRegEx could not find zero-length matches. So a regex like (?m)^ that should find a zero-length match at the start of each line would not find any matches at all with TRegEx. The reason for this is that TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx sets its State property to [preNotEmpty] in its constructor, which tells it to skip zero-length matches. This is not a problem with TPerlRegEx because users of this class can change the State property. But TRegEx does not provide a way to change this property. So in Delphi XE5 and prior, TRegEx cannot find zero-length matches. In Delphi XE6, TPerlRegEx's constructor was changed to initialize State to the empty set. This means TRegEx is now able to find zero-length matches. TRegEx.Replace() using the regex (?m)^ now inserts the replacement at the start of each line, as you would expect. If you use TPerlRegEx directly, you'll need to set State to [preNotEmpty] in your own code if you relied on its behavior of skipping zero-length matches. You will need to check existing applications that use TRegEx for regular expressions that incorrectly allow zero-length matches. In XE5 and prior, TRegEx using \d* would match all numbers in a string. In XE6, the same regex still matches all numbers, but it also finds a zero-length match at each position in the string. RegexBuddy 4 warns about zero-length matches on the Create panel if you set it to Detailed mode. At the bottom of the regex tree there will be a node saying either "your regular expression may find zero-length matches" or "zero-length matches will be skipped", depending on whether your application allows zero-length matches (XE6 TRegEx) or not (XE through XE5 TRegEx).
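
    To see what the (?m)^ behavior described above looks like in practice, here is a quick illustration using the .NET regex engine, which has always handled zero-length matches the way XE6's TRegEx now does; the strings are made up for the example, and this is C#, not Delphi:

        using System;
        using System.Text.RegularExpressions;

        class ZeroLengthMatchDemo
        {
            static void Main()
            {
                // (?m)^ matches a zero-length position at the start of each line.
                // XE6's TRegEx.Replace now behaves like this; XE5 and earlier
                // found no matches at all for this pattern.
                string result = Regex.Replace("first line\nsecond line", "(?m)^", "> ");
                Console.WriteLine(result);
                // Output:
                // > first line
                // > second line
            }
        }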

    Read the article

  • Improving CSS With .LESS

    Cascading Style Sheets, or CSS, is a syntax used to describe the look and feel of the elements in a web page. CSS allows a web developer to separate the document content - the HTML, text, and images - from the presentation of that content. Such separation makes the markup in a page easier to read, understand, and update; it can result in reduced bandwidth as the style information can be specified in a separate file and cached by the browser; and makes site-wide changes easier to apply. For a great example of the flexibility and power of CSS, check out CSS Zen Garden. This website has a single page with fixed markup, but allows web developers from around the world to submit CSS rules to define alternate presentation information. Unfortunately, certain aspects of CSS's syntax leave a bit to be desired. Many style sheets include repeated styling information because CSS does not allow the use of variables. Such repetition makes the resulting style sheet lengthier and harder to read; it results in more rules that need to be changed when the website is redesigned to use a new primary color. Specifying inherited CSS rules, such as indicating that a elements (i.e., hyperlinks) in h1 elements should not be underlined, requires creating a single selector name, like h1 a. Ideally, CSS would allow for nested rules, enabling you to define the a rules directly within the h1 rules. .LESS is a free, open-source port of Ruby's LESS library. LESS (and .LESS, by extension) is a parser that allows web developers to create style sheets using new and improved language features, including variables, operations, mixins, and nested rules. Behind the scenes, .LESS converts the enhanced CSS rules into standard CSS rules. This conversion can happen automatically and on-demand through the use of an HTTP Handler, or done manually as part of the build process. Moreover, .LESS can be configured to automatically minify the resulting CSS, saving bandwidth and making the end user's experience a snappier one. This article shows how to get started using .LESS in your ASP.NET websites. Read on to learn more! Read More >

    Read the article

  • Integrating Flickr with ASP.Net application

    - by sreejukg
    Flickr is the popular photo management and sharing application offered by Yahoo. The services from Flickr allow you to store and share photos and videos online, and Flickr offers strong API support for almost all the services it provides. Using this API, developers can integrate photos into their public websites. Since 2005, developers have collaborated on top of Flickr's APIs to build fun, creative, and gorgeous experiences around photos that extend beyond Flickr. In this article I am going to demonstrate how easily you can bring the photos stored on Flickr to your website. Let me explain the scenario this article is trying to address. I have a Flickr account where I upload photos and share them in the many ways offered by Flickr. Now I have a public website; instead of re-uploading the photos to the public website, I want to show them from Flickr. Also, I need complete control over which photos to display. So I referred to the Flickr documentation, and there is API support ready to address my scenario (and more...).
    FlickrAPI for ASP.Net: To integrate Flickr with ASP.Net applications, there is a library available on CodePlex. You can find it here: http://flickrnet.codeplex.com/ Visit the URL and download the latest version. The download is a Zip file; when you unzip it you will get a number of dlls. Since I am going to use an ASP.Net application, I need FlickrNet.dll. See the screenshot of all the dlls; there is also a help file (.chm) available in the download for your reference. Once you have the dll, you can use the Flickr API from your website. I assume you have a Flickr account and are familiar with Flickr services.
    Arrange your photos using Sets in Flickr: In Flickr, you can define sets and add your uploaded photos to them. You can compare a set to a photo album. A set is a logical collection of photos, which is an excellent option for categorizing your photos. Typically you will have a number of sets, each set containing a few photos. You can write an application that brings photos from sets to your website. For the purpose of this article I have already created a set in Flickr and added some photos to it. Once you log in to Flickr, you can see the Sets under the menu. In the Sets page, you will see all the sets you have created. As you will notice, there are some sample images I have uploaded just to test the functionality; I wish I could say I take good photos, so please bear with me. I have created two photo sets named Blue Album and Red Album. Clicking on the image for a set will take you to the corresponding set page. The set "Red Album" contains 4 photos, and the set has a unique ID (highlighted in the URL). You can simply retrieve the photos with that set ID from your application. In this article I am going to retrieve the images from the Red Album in my ASP.Net page. For that, I first need to set up the Flickr API for my usage.
    Configure the Flickr API key: As I mentioned, we are going to use the Flickr API to retrieve the photos stored in Flickr. In order to get access to the Flickr API, you need an API key. To create an API key, navigate to the URL http://www.flickr.com/services/apps/create/ and click on the "Request an API key" link. Now you need to tell Flickr whether your application is commercial or non-commercial; I have selected a non-commercial key. Next you need to enter certain information about your application. Once you enter the details and click on the Submit button, Flickr will create the API key for your application. Generating a non-commercial API key is very easy; in a couple of steps the key is generated and you can use it in your application immediately.
    ASP.Net application for retrieving photos: Now we need to write an ASP.Net application that displays pictures from Flickr. Create an empty web application (I named it FlickerIntegration) and add a reference to FlickerNet.dll. Add a web form page to the application where you will retrieve and display photos (I have named it Gallery.aspx). After doing all this, the solution explorer will look similar to the following. I have used the code below in the Gallery.aspx page, and its output follows it. I am going to explain the code line by line here. First it adds a reference to the FlickrNet namespace: using FlickrNet; Then it creates a Flickr object using your API key: Flickr f = new Flickr("<yourAPIKey>"); Now when you retrieve photos, you can decide which fields to retrieve from Flickr. Every photo in Flickr carries a lot of information, and retrieving all of it will affect performance. For demonstration purposes, I have retrieved all the available fields with PhotoSearchExtras.All, but if you want to specify particular fields you can use the logical OR operator (|). For example, the following statement will retrieve the owner name and the date taken: PhotoSearchExtras extraInfo = PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken; Then retrieve all the photos from a photo set using the PhotosetsGetPhotos method, passing the PhotoSearchExtras object created earlier: PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo); The PhotosetsGetPhotos method returns a collection of Photo objects, which you can navigate with a foreach statement: foreach (Photo p in photos) { //access each photo's properties } The Photo class has a lot of properties that map to the properties from Flickr; the .chm documentation that comes along with the CodePlex download is a great asset for understanding them. In the above code I just used the following: p.LargeUrl - retrieves the large image URL for the photo; p.ThumbnailUrl - retrieves the thumbnail URL for the photo; p.Title - retrieves the title of the photo; p.DateUploaded - retrieves the date of upload. Visual Studio IntelliSense will list all the properties, so it is easy to find the ones you are looking for; most of them are self-explanatory. In the above code, I just pushed the photos to the page. In a real application you can use the retrieved photos along with jQuery libraries to create animated photo galleries, slideshows, etc.
    Configuration and troubleshooting: If you get an access denied error while executing the code, you need to disable caching in the Flickr API. FlickrNet caches the retrieved photos to your local disk, and you can specify a cache folder, to which the application needs write permission, in code as follows: Flickr.CacheLocation = Server.MapPath("./FlickerCache/"); If the application doesn't have write permission to the cache folder, the application will throw an access denied error. If you cannot give write permission to the cache folder, then you must disable caching. You can do this from code as follows: Flickr.CacheDisabled = true; Disabling the cache will have an impact on performance. Take care!
    You can also define the Flickr settings in the web.config file; the documentation is here: http://flickrnet.codeplex.com/wikipage?title=ExampleConfigFile&ProjectName=flickrnet Flickr is a great place for storing and sharing photos, and the API access allows developers to achieve seamless integration with the photos uploaded on Flickr.
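
    The Gallery.aspx listing referenced above was published as an image in the original post, so here is a hedged C# reconstruction based on the line-by-line walkthrough; the phGallery PlaceHolder and the exact rendering are assumptions for the sketch:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using FlickrNet;

        public partial class Gallery : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Point the cache at a writable folder (or disable caching instead).
                Flickr.CacheLocation = Server.MapPath("./FlickerCache/");

                // Replace with the API key issued by Flickr.
                Flickr f = new Flickr("<yourAPIKey>");

                // Retrieve every available field; narrow this down in production.
                PhotoSearchExtras extraInfo = PhotoSearchExtras.All;

                // The set ID of the "Red Album" shown earlier.
                PhotosetPhotoCollection photos =
                    f.PhotosetsGetPhotos("72157629872940852", extraInfo);

                foreach (Photo p in photos)
                {
                    // Render each photo as a thumbnail linking to the large image.
                    // phGallery is a PlaceHolder assumed to exist in the .aspx markup.
                    HyperLink link = new HyperLink();
                    link.NavigateUrl = p.LargeUrl;
                    link.ImageUrl = p.ThumbnailUrl;
                    link.ToolTip = p.Title + " (uploaded " + p.DateUploaded + ")";
                    phGallery.Controls.Add(link);
                }
            }
        }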

    Read the article

  • 10 PowerShell One Liners

    - by BizTalk Visionary
    Here are a few one-liners that use NetCmdlets. Some of these I've blogged about before, some are new. Let me know if you have questions, which ones you find useful, or how you altered these to suit your own needs.
    Send email to a list of recipient addresses:
    import-csv users.csv | % { send-email -to $_.email -from [email protected] -subject "Important Email" -message "Hello World!" -server 10.0.1.1 }
    Show the access control list for a specific Exchange folder:
    get-imap -server $mymailserver -cred $mycred -folder INBOX.RESUMES -acl
    Add look and read permissions on an Exchange folder, for a list of accounts pulled from a CSV file:
    import-csv users.csv | % { set-imap -server $mymailserver -cred $mycred -acluser $_.username -folder INBOX.RESUMES -acl "lr" }
    Sync system time with an Internet time server:
    get-time -server clock.psu.edu -set
    To remotely sync the time on a set of computers:
    import-csv computers.csv | % { Invoke-Command -computerName $_.computer -cred $mycred -scriptblock { get-time -server clock.psu.edu -set } }
    Delete all emails from an Exchange folder that match a certain criterion. For example, delete all emails from [email protected]:
    get-imap -server $mailserver -cred $mycred | ? {$_.FromEmail -eq "[email protected]"} | % { set-imap -server $mailserver -cred $mycred -message $_.Id -delete }
    Update Twitter status from PowerShell:
    get-http -url "http://twitter.com/statuses/update.xml" -cred $mycred -variablename status -variablevalue "Tweeting with NetCmdlets!"
    A test-path that works over FTP, FTPS (SSL), and SFTP (SSH) connections:
    get-ftp -server $remoteserver -cred $mycred -path /remote/path/to/checkfor*
    Don't forget the *. Also, to use SSL or SSH just add an -ssl or -ssh parameter.
    List disabled user accounts in Active Directory (or any other LDAP server):
    get-ldap -server $ad -cred $mycred -dn dc=yourdc -searchscope wholesubtree -search "(&(objectclass=user)(objectclass=person)(company=*)(userAccountControl:1.2.840.113556.1.4.803:=2))"
    List Active Directory groups and their members:
    get-ldap -server testman -cred $mycred -dn dc=NS2 -searchscope wholesubtree -search "(&(objectclass=group)(cn=*admin*))" | select ResultDN, member
    Display the last initialization time (e.g. last reboot time) of all discoverable SNMP agents on a network:
    import-csv computers.csv | % { get-snmp -agent $_.computer -oid sysUpTime.0 | % { ([datetime]::Now).AddSeconds(-($_.OIDValue/100)) } }
    Not mentioned here: data conversion (Yenc, QP, UUencoding, MD5, SHA1, base64, etc.), DNS, News Groups (NNTP/UseNet), POP mail, RSS feeds, Amazon S3, Syslog, TFTP, TraceRoute, SNMP Traps, UDP, WebDAV, whois, Rexec/Rshell/Telnet, Zip files, sending IMs (Jabber/GoogleTalk/XMPP), sending text messages and pages, ping, and more. Original Source: Lance's Textbox

    Read the article

  • Good Scoop: The PeopleSoft/IBM Backstory

    - by Brian Dayton
    Sometimes you're searching for something online and you find an unrelated, bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of a PeopleSoft consulting shop in San Ramon, CA called Grey Sparling. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting. His commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through was also interesting:
    · "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware."
    · "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to whatever a "PeopleSoft built on WebSphere" would have been significant."
    · "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have."
    I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting. Especially the advantages of aligning development of applications and infrastructure development under one roof. Here's a link to the whole blog entry.

    Read the article

  • Silverlight Cream for April 20, 2010 -- #842

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Svetla Stoycheva, Alexey Zakharov, Chris Rouw, David Anson(-2-), Bill Reiss, John Papa and Adam Kinney, Chris Klug, CorrinaB, and Mike Snow.
    Shoutouts: Pete Brown interviewed David Kelley at MIX10: Pete at MIX10: David Kelley on the Prototype WPF and Silverlight Retail Experience. Pete Brown also interviewed Emil Stoychev at MIX10: Pete at MIX10: Emil Stoychev on the CompletIT Silverlight Site. SilverlightShow has a MIX10 Review by SilverlightShow Live Reporter Cigdem Patlak. SilverlightShow also has an Interview with SilverlightShow Article Author Andrej Tozon.
    From SilverlightCream.com:
    Implementing Push Notifications in Windows Phone 7: Zoltan Arvai has a post up on SilverlightShow discussing Push Notification on WP7 ... what it is, and how to use it.
    Completit.com - the challenges behind building a corporate website in Silverlight: Svetla Stoycheva shows off the new CompleteIT corporate website, which is pretty darn cool... and discusses some of the challenges and solutions.
    Introducing to Halcyone - Silverlight Application Framework: Silverlight Rest Extensions: Alexey Zakharov has a tutorial up on a Silverlight application framework he's working on called Halcyone, which is available on CodePlex.
    Using the Tag Property during Silverlight Binding: Chris Rouw details his SL3 to SL4 conversion and some issues he had, and how he was able to resolve a binding problem using the tag property.
    Using ContextMenu to implement SplitButton and MenuButton for Silverlight (or WPF): David Anson has a cool discussion up of using the ContextMenu code he put up previously to build a Split button, and includes all the code as usual.
    Silverlight/WPF Data Visualization Development Release 4 and Windows Phone 7 Charting sample!: David Anson updated his Data Visualization because of the new releases, and this time he's including WP7... charting in WP7... !
    Space Rocks game step 10: More fun with rocks: In episode 10, Bill Reiss shows how to deal with multiple asteroids and all the interaction.
    Silverlight Training Course (Silverlight 4): Get your serious Silverlight 4 Mojo on with a new SL4 Training kit on Channel 9 ... buncha folks, spearheaded (it looks like) by John Papa and Adam Kinney...
    Plug-ins and composite applications in Silverlight – pt 3: Chris Klug is back with part 3 of his series on extensions and plug-in loading. So far he's covered a roll-your-own concept and MEF; now he digs into Prism.
    Transitions, Animations, and Effects with Blend - Part One: How cool to have CorrinaB speak at your User Group meeting! ... She did just that in Portland, and instead of simply dropping a deck and some code in her blog, she's giving the run-down on her presentation... always good stuff, Corrina!
    Tip of the Day #110 – Using Static Resources in Class Libraries: Mike Snow's latest tip is about how to create and use a Resource Dictionary.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group. Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification

    - by pinaldave
    Here is the conversation I had right after I had posted my earlier blog post about Download Microsoft SQL Server 2012 RTM Now.
    Rajesh: So SQL Server is available for me to download?
    Pinal: Yes, sure. Check the link here.
    Rajesh: It is a trial; do you know when it will be available for everybody?
    Pinal: I think you mean General Availability (GA), which is on April 1st, 2012.
    Rajesh: I want to have a head start with the SQL Server 2012 examinations, and I want to know about every single one.
    Exam 70-461: Querying Microsoft SQL Server 2012. This exam is intended for SQL Server database administrators, implementers, system engineers, and developers with two or more years of experience who are seeking to prove their skills and knowledge in writing queries.
    Exam 70-462: Administering Microsoft SQL Server 2012 Databases. This exam is intended for database professionals who perform installation, maintenance, and configuration tasks as their primary areas of responsibility. They will often set up database systems and are responsible for making sure those systems operate efficiently.
    Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012. The primary audience for this exam is Extract Transform Load (ETL) and Data Warehouse developers. They are most likely to focus on hands-on work creating business intelligence (BI) solutions, including data cleansing, ETL, and Data Warehouse implementation.
    Exam 70-464: Developing Microsoft SQL Server 2012 Databases. This exam is intended for database professionals who build and implement databases across an organization while ensuring high levels of data availability. They perform tasks including creating database files, creating data types and tables, planning, creating, and optimizing indexes, implementing data integrity, implementing views, stored procedures, and functions, and managing transactions and locks.
    Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012. This exam is intended for database professionals who design and build database solutions in an organization. They are responsible for the creation of plans and designs for database structure, storage, objects, and servers.
    Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012. The primary audience for this exam is BI developers. They are most likely to focus on hands-on work creating the BI solution, including implementing multi-dimensional data models, implementing and maintaining OLAP cubes, and creating information displays used in business decision making.
    Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012. The primary audience for this exam is the BI architect. BI architects are responsible for the overall design of the BI infrastructure, including how it relates to other data systems in use.
    Looking at Rajesh's passion, I am motivated too! I may want to start attempting the exams in the near future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Site Studio Mobile Example - WCM Reuse

    - by john.brunswick
    Mobile internet usage is growing by leaps and bounds, and it is theorized that in the not-too-distant future it will eclipse traditional access via desktop browsers. Mary Meeker, a managing director at Morgan Stanley and head of their global technology research team, recently predicted in an Events@Google series presentation that mobile usage will eclipse desktop usage within the next 5 years. In order to reach their prospects, customers and business partners, organizations will need to make their content readily available on mobile devices. A few years ago it was fairly challenging to provide a special, separate site to cater to mobile users using technologies like WML (Wireless Markup Language). Modern mobile browsers have rendered this unnecessary, and the focus has now moved toward providing a browsing experience that works well on small screen sizes and is highly performant. What does all of this mean for Oracle UCM? Taking site content from an existing Site Studio site and targeting it for consumption on mobile devices is a very straightforward process that is aided by a number of native capabilities in the product. The example highlighted in this post takes advantage of the dynamic conversion capabilities in Oracle UCM to enable site content to be created and updated via MS Office documents. These documents are then converted to a simple, clean HTML format for consumption in both the desktop and mobile browsing experiences. To help better understand how this is possible, the example below shows a fictional .COM and its mobile site counterpart, both of which leverage the same underlying content. The scenario is not complete or production-ready, but it highlights that a mobile experience may be best delivered by omitting portions of a site that would be present in the version served to desktop clients. If you have browsed CNet (news.com) on a mobile device, it quickly becomes apparent that they serve an optimized version for your mobile device. An iPhone-style version can be accessed at http://iphone.cnet.com/. In order to do that, they leveraged some work done for the iPhone iUi project developed by Joe Hewitt, which gives mobile browsers an experience similar to what users may find in a native iPhone application. Our example uses parts of this framework (the CSS), an approach that provides a page that degrades nicely over a wide range of mobile browsers, since it consists of lightweight HTML markup and CSS. The iPhone iUi framework also provides some nice JavaScript to enable animated transitions between pages, but for the widest range of mobile browser compatibility we will only incorporate the CSS and the HTML DIV / UL based page markup in our example.

    Read the article

< Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >