Search Results

Search found 1034 results on 42 pages for 'jun chen'.


  • Basic Auth on DirectoryIndex Only

    - by Brad
    I am trying to configure basic auth for my index file, and only my index file. I have configured it like so: <Files index.htm> Order allow,deny Allow from all AuthType Basic AuthName "Some Auth" AuthUserFile "C:/path/to/my/.htpasswd" Require valid-user </Files> When I visit the page, 401 Authorization Required is returned as expected, but the browser doesn't prompt for the username/password. Some further inspection has revealed that Apache is not sending the WWW-Authenticate header. GET http://myhost/ HTTP/1.1 Host: myhost Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 HTTP/1.1 401 Authorization Required Date: Tue, 21 Jun 2011 21:36:48 GMT Server: Apache/2.2.16 (Win32) Content-Length: 401 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>401 Authorization Required</title> </head><body> <h1>Authorization Required</h1> <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p> </body></html> Why is Apache doing this? How can I configure it to send that header appropriately? It is worth noting that this exact same set of directives work fine if I set them for a whole directory. It is only when I configure them to a directory index that they do not work. This is how I know my .htpasswd and such are fine. I am using Apache 2.2 on Windows. On another note, I found this listed as a bug in Apache 1.3. This leads me to believe that this is actually a configuration problem on my end.
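    A quick way to see the symptom from any shell, plus a hedged workaround. The sketch assumes the 401 is produced by the DirectoryIndex subrequest for index.htm rather than by the original request for "/", so matching the request URI with <LocationMatch> instead of the file with <Files> may let Apache emit the WWW-Authenticate header; this is a guess based on the directives above, not a confirmed fix.

      # Does the server offer an auth challenge? (should print a WWW-Authenticate line)
      curl -sI http://myhost/ | grep -i www-authenticate

      # Hypothetical workaround for httpd.conf: protect the URI, not the file.
      # "^/(index\.htm)?$" matches both "/" and "/index.htm":
      #
      #   <LocationMatch "^/(index\.htm)?$">
      #       AuthType Basic
      #       AuthName "Some Auth"
      #       AuthUserFile "C:/path/to/my/.htpasswd"
      #       Require valid-user
      #   </LocationMatch>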


  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive, simply by changing the permissions. The Story I have many operating systems installed on my computer, and constantly switch between them. I bought a 1TB HD and formatted it as HFS+ (not journaled). It worked well between OSX and all of my linux installations while having much better metadata support than NTFS. I never synced the UIDs for my operating systems so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OSX, and then changing the file ownership of all files on the drive to match OSX. About 50% of the files on the drive were originally owned by OSX, the other half were owned by the various linux installations. I started to try and change the file permissions for the folders, and that's when it went south. The Commands These commands were run recursively on the one section of the drive. sudo chflags nouchg sudo chflags -N sudo chown myusername sudo chmod 666 sudo chgrp staff The Bad Sometime during the execution of these commands, all of the files belonging to OSX were deleted. If a folder had linux-based files it would remain intact, but any folder containing exclusively OSX files was erased. If a folder containing linux files also contained a subfolder with only OSX files, the subfolder would remain but is inaccessible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder; I also have a music folder with the same issue but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all 3 states - the linux files before and after, and the OSX files before. OSX File Before -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file com.apple.FinderInfo 32 Linux File before: -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file/ com.apple.FinderInfo 32 Linux File After (Read only): (Different file, but I believe the same permissions originally) -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file com.apple.FinderInfo 32 These files still exist so if there are any other commands to run on them to determine what has happened here, I can do that. EDIT Running ls on one of the "empty" deleted OSX folders yields this: ls: .: Permission denied ls: ..: Permission denied ls: subdirA: Permission denied ls: subdirB: Permission denied ls: subdirC: Permission denied ls: subdirD: Permission denied I believe my files might still be there, but the permissions are screwed.
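    For what it's worth, the symptoms above are consistent with the recursive chmod 666 rather than actual deletion: stripping the execute bit from directories makes them untraversable, which shows up exactly as "Permission denied" and 0-byte sizes. A minimal repair sketch, assuming the files still exist and the drive is mounted at the hypothetical path /Volumes/Media:

      # Restore execute (traversal) on directories only; leave files at 644:
      sudo find /Volumes/Media -type d -exec chmod 755 {} +
      sudo find /Volumes/Media -type f -exec chmod 644 {} +
      # Then re-check one of the "empty" folders:
      ls -la /Volumes/Media/videos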


  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an openldap server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database. ldap.conf.bak (renamed to ensure it's not being used): ########## # Basics # ########## include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema pidfile /var/run/slapd/slapd.pid argsfile /var/run/slapd/slapd.args loglevel none modulepath /usr/lib/ldap # modulepath /usr/local/libexec/openldap moduleload back_bdb.la database config #rootdn "cn=admin,cn=config" rootpw secret database bdb suffix "dc=example,dc=com" rootdn "cn=manager,dc=example,dc=com" rootpw secret directory /usr/local/var/openldap-data ######## # ACLs # ######## access to attrs=userPassword by anonymous auth by self write by * none access to * by self write by * none When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2). backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2) slap_startup failed (test would succeed using the -u switch) Using the -u switch it works, of course. But that merely creates the configuration. It doesn't resolve the underlying problem: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u config file testing succeeded Looking in the database directory, the basic files are there (with right ownership, after a manual chown), but the dbd file wasn't created: root@server:/etc/ldap# ls -al /usr/local/var/openldap-data total 4328 drwxr-sr-x 2 openldap openldap 4096 Mar 1 15:23 . drwxr-sr-x 4 root staff 4096 Mar 1 13:50 .. -rw-r--r-- 1 openldap openldap 3080 Mar 1 14:35 DB_CONFIG -rw------- 1 openldap openldap 24576 Mar 1 15:23 __db.001 -rw------- 1 openldap openldap 843776 Mar 1 15:23 __db.002 -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003 -rw------- 1 openldap openldap 655360 Mar 1 14:35 __db.004 -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005 -rw------- 1 openldap openldap 32768 Mar 1 15:23 __db.006 -rw-r--r-- 1 openldap openldap 2048 Mar 1 15:23 alock (note that, because I'm doing this as root, I had to also change ownership of some of the files created by slaptest) Finally, I can start the slapd service, but it dies in the attempt (text from syslog): Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config: Mar 1 15:06:23 server slapd[21160]: slapd stopped. Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy. I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye. All my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!
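    One thing slaptest will not do is create the BDB database files themselves; they are usually materialised the first time an entry is written. A hedged sketch, reusing the suffix and directory from the config above (the LDIF content is illustrative only):

      sudo mkdir -p /usr/local/var/openldap-data
      cat > /tmp/base.ldif <<'EOF'
      dn: dc=example,dc=com
      objectClass: top
      objectClass: dcObject
      objectClass: organization
      o: Example
      dc: example
      EOF
      sudo slapadd -f ldap.conf.bak -l /tmp/base.ldif   # should create id2entry.bdb etc.
      sudo chown -R openldap:openldap /usr/local/var/openldap-data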


  • Change default DNS server in Arch Linux

    - by AntoineG
    I'm in Viet Nam and most social websites (Facebook, Twitter and the likes - even reddit) are blocked by the ISP DNS server. I tried to change the DNS server of my Arch box using the resolv.conf file, but it failed miserably since dhcpcd regenerates this file automatically every time I connect to the LAN. I've been looking around to try and find out how to fix this, without success. Either I suck at Googling, or it is non-trivial to do. EDIT 1: Meh, apparently posting it here made me feel guilty and I had to push my search a bit more. I found the same article that Ankur posted below. This is what I did, if anybody ever faces the same problem: $ sudo gvim /etc/dhcpcd.conf Add "nohook resolv.conf" at the tail of the file. $ sudo gvim /etc/resolv.conf Add to the file (OpenDNS servers): nameserver 208.67.222.222 nameserver 208.67.220.220 Or (Google DNS): nameserver 8.8.8.8 nameserver 8.8.4.4 Then, verify it worked (needs package dnsutils): $ dig www.facebook.com ; <<>> DiG 9.9.1-P1 <<>> www.facebook.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16994 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;www.facebook.com. IN A ;; ANSWER SECTION: www.facebook.com. 89 IN A 69.171.224.53 ;; Query time: 87 msec ;; SERVER: 208.67.222.222#53(208.67.222.222) ;; WHEN: Thu Jun 28 00:43:23 2012 ;; MSG SIZE rcvd: 61 See ;; SERVER: 208.67.222.222#53(208.67.222.222) - it worked.
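    An alternative sketch, if you would rather not touch dhcpcd.conf: making the file immutable stops the DHCP client from rewriting it. This is a blunt instrument; remember to remove the flag before changing DNS servers again:

      sudo chattr +i /etc/resolv.conf   # lock the file against rewrites
      # ...and to undo later:
      sudo chattr -i /etc/resolv.conf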


  • SSH: Connection Reset by Peer

    - by hopeless
    I have a Solaris 10 server on another network. I can ping it and telnet to it, but ssh doesn't connect. PuTTY log contains nothing of interest (they both negotiate to ssh v2) and then I get "Event Log: Network error: Software caused connection abort". ssh is defintely running: svcs -a | grep ssh online 12:12:04 svc:/network/ssh:default Here's an extract from the server's /var/adm/messages (anonymised) Jun 8 19:51:05 ******* sshd[26391]: [ID 800047 auth.crit] fatal: Read from socket failed: Connection reset by peer However, if I telnet to the box, I can login to ssh locally. I can also ssh to other (non-Solaris) machines on that network fine so I don't believe that it's a network issue (though, since I'm a few hundred miles away, I can't be sure). The server's firewall is disabled, so that shouldn't be a problem root@******** # svcs -a | grep -i ipf disabled Apr_27 svc:/network/ipfilter:default Any ideas what I should start checking? Update: Based on the feedback below, I've run sshd in debug mode. Here's the client output: $ ssh -vvv root@machine -p 32222 OpenSSH_5.0p1, OpenSSL 0.9.8h 28 May 2008 debug2: ssh_connect: needpriv 0 debug1: Connecting to machine [X.X.X.X] port 32222. debug1: Connection established. debug1: identity file /home/lawrencj/.ssh/identity type -1 debug1: identity file /home/lawrencj/.ssh/id_rsa type -1 debug1: identity file /home/lawrencj/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version Sun_SSH_1.1 debug1: no match: Sun_SSH_1.1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.0 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer And here's the server output: root@machine # /usr/lib/ssh/sshd -d -p 32222 debug1: sshd version Sun_SSH_1.1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: Bind to port 32222 on ::. Server listening on :: port 32222. debug1: Bind to port 32222 on 0.0.0.0. Server listening on 0.0.0.0 port 32222. debug1: Server will not fork when running in debugging mode. Connection from 1.2.3.4 port 2652 debug1: Client protocol version 2.0; client software version OpenSSH_5.0 debug1: match: OpenSSH_5.0 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-Sun_SSH_1.1 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: Failed to acquire GSS-API credentials for any mechanisms (No credentials were supplied, or the credentials were unavailable or inaccessible Unknown code 0 ) debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer debug1: Calling cleanup 0x4584c(0x0) This line seems a likely candidate: debug1: Failed to acquire GSS-API credentials for any mechanisms (No credentials were supplied, or the credentials were unavailable or inaccessible
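    One hedged experiment worth trying from the client side: old Sun_SSH_1.1 servers are reputedly picky about the large algorithm lists modern OpenSSH clients advertise in KEXINIT, which would explain a reset at exactly this point. Shrinking the offer costs nothing to test (the algorithm choices below are assumptions, not a known-good set):

      ssh -vvv \
          -o Ciphers=aes128-cbc,3des-cbc \
          -o MACs=hmac-sha1 \
          -o HostKeyAlgorithms=ssh-rsa,ssh-dss \
          root@machine -p 32222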


  • Microsoft-signed driver appears as publisher not verified

    - by Priyanka Gupta
    Task at hand: Microsoft-sign drivers on Win 7. I Microsoft-signed my driver package three times, each time thinking I might have missed a step or something. However, I cannot seem to get rid of the Windows Security error message "Windows can't verify the publisher of this driver software". This is not the first time I have signed driver packages. I was successfully able to sign other driver packages a few months ago. However, with this driver package I keep getting the Windows security dialog box. Here's the procedure I follow - Create a new cat file using the INF2CAT tool. Self-sign the driver using a VeriSign Class 3 Public Primary Certification Authority - G5.cer. Run the Microsoft tests on DTM servers and clients with the devices that use this driver. Create the WLK submission package. Self-sign the cab file. Submit the package for certification. The catalog file that comes back after successfully passing tests gives the name of signer as "Microsoft Windows Hardware Compatibility Publisher". When I check the validity of the signature using SignTool, it says the signature is valid. However, when I try to install the driver with the newly signed catalog file, Windows complains. Any ideas? Edit 11/12/2012: Reply to Eugene's comment Thanks for the help, Eugene. Yes, I did sign two other driver packages before. One of them was a modified version of the WinUSB driver. I am using the same certificate I used when I signed those two driver packages a few months ago. It costs $250 per signing from Microsoft. I would think that Microsoft would complain about it during certification if the certificate were wrong. I use the following command to self-sign the CAT file. I don't have to specify the certificate name as there's only one certificate in the directory - Signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat Below is the result from running the Verify command on the Microsoft-signed OurCatalogFile.cat C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64>signtool verify /v "C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9 Signing Certificate Chain: Issued to: Microsoft Root Certificate Authority Issued by: Microsoft Root Certificate Authority Expires: Sun May 09 18:28:13 2021 SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072 Issued to: Microsoft Windows Hardware Compatibility PCA Issued by: Microsoft Root Certificate Authority Expires: Thu Jun 04 16:15:46 2020 SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78 Issued to: Microsoft Windows Hardware Compatibility Publisher Issued by: Microsoft Windows Hardware Compatibility PCA Expires: Wed Sep 18 18:20:55 2013 SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E The signature is timestamped: Tue Nov 06 11:26:48 2012 Timestamp Verified by: Issued to: Microsoft Root Authority Issued by: Microsoft Root Authority Expires: Thu Dec 31 02:00:00 2020 SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419 Issued to: Microsoft Timestamping PCA Issued by: Microsoft Root Authority Expires: Sun Sep 15 02:00:00 2019 SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245 Issued to: Microsoft Time-Stamp Service Issued by: Microsoft Timestamping PCA Expires: Tue Apr 09 16:53:56 2013 SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat Number of files successfully Verified: 1 Number of warnings: 0 Number of errors: 0 Thank you!
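    One hedged check that is easy to miss: "signtool verify" with no policy switch uses the default Authenticode policy, while driver installation is judged against the kernel-mode driver signing policy. Verifying the catalog with /kp, against the actual .inf, is closer to what Windows evaluates at install time (the .inf name below is a placeholder):

      signtool verify /kp /v /c OurCatalogFile.cat OurDriver.inf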


  • OpenLDAP with StartTLS broken on Debian Lenny

    - by mr.zog
    I'm trying to get OpenLDAP on Lenny to work with StartTLS. I have a Fedora 13 machine which I'm using as a client for testing. So far the Fedora client is ignoring the 'host' directive in /etc/ldap.conf when I try to connect using ldapsearch. The client wants to connect to 127.0.0.1:389 even if I specify -H ldaps://server.name on when using ldapsearch. /etc/ldap.conf on the client machine is in mode 444. But even when I try connecting locally from an ssh session, I see errors like this: ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1) Someone hit me with a cluebat, plz. Update: you must use ~/.ldaprc for settings such as 'host'. Also, I just used nmap against the ldap server and it showed 636 and 389 in an open state. Here's what prints to screen when I try to connect with, ldapsearch -ZZ –x '(objectclass=*)'+ -d -1 ldap_create ldap_extended_operation_s ldap_extended_operation ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP 192.168.10.41:636 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 192.168.10.41:636 ldap_pvt_connect: fd: 3 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdb8 end=0x9bdbdd7 len=31 0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1 0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037 ber_scanf fmt ({) ber: ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdbd end=0x9bdbdd7 len=26 0000: 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e w...1.3.6.1.4.1. 0010: 31 34 36 36 2e 32 30 30 33 37 1466.20037 ber_flush2: 31 bytes to sd 3 0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1 0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037 ldap_write: want=31, written=31 0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1 0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037 ldap_result ld 0x9bd3050 msgid 1 wait4msg ld 0x9bd3050 msgid 1 (infinite timeout) wait4msg continue ld 0x9bd3050 msgid 1 all 1 ** ld 0x9bd3050 Connections: * host: 192.168.10.41 port: 636 (default) refcnt: 2 status: Connected last used: Sun Jun 6 12:54:05 2010 ** ld 0x9bd3050 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x9bd3050 request count 1 (abandoned 0) ** ld 0x9bd3050 Response Queue: Empty ld 0x9bd3050 response count 0 ldap_chkResponseList ld 0x9bd3050 msgid 1 all 1 ldap_chkResponseList returns ld 0x9bd3050 NULL ldap_int_select read1msg: ld 0x9bd3050 msgid 1 all 1 ber_get_next ldap_read: want=8, got=0 ber_get_next failed. ldap_err2string ldap_start_tls: Can't contact LDAP server (-1)
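    On the client side, a minimal sketch of the per-user settings ldapsearch actually honours: the 'host' directive belongs in ~/.ldaprc, as noted above, and StartTLS will abort exactly like this when the server's CA certificate is not trusted locally. The CA path here is hypothetical:

      cat > ~/.ldaprc <<'EOF'
      HOST server.name
      TLS_CACERT /etc/ssl/certs/ldap-ca.pem
      EOF
      ldapsearch -ZZ -x '(objectclass=*)' -d 1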



  • nginx probably delivering wrong file type for .css file with PHP tags

    - by Katai
    And again, NGINX is giving me many questions today :) As always, I already tried around for a while, but can't seem to fix this issue: I just configured NGINX to handle my .css files the same as my .php files (to parse PHP tags inside the CSS file). This works perfectly, and the file is found and delivered. I could debug it with Firebug, and everything is OK (it displays the contents of the .css inside the opened <link> tag). So, everything working, right? Wrong. It has the CSS, but it does not interpret it! What I mean by this: apparently, the file type of the CSS (or application type, whatever) is wrong. The page can access the CSS, but doesn't bother at all to actually use it. What I checked / tried: There are no PHP errors inside of the .css, so that one is out The .css is accessible. I can call the URI manually, or check if the included URL finds it: both work The .css has no syntax errors (I switched to a css that just has body {background-color: #000; } It works without NGINX I deleted the browser cache / restarted NGINX after config rewrites Here's the configuration: server { listen 80; server_name localhost; access_log /var/log/nginx/board.access_log; error_log /var/log/nginx/board.error_log warn; root /var/www/board/public; index index.php; fastcgi_index index.php; location / { try_files $uri $uri /index.php; } location ~ (\.php|\.css)$ { try_files $uri =404; include /etc/nginx/fastcgi_params; #keepalive_timeout 0; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:7777; } } Firebug 'Network' response headers: Connection keep-alive Content-Encoding gzip Content-Type text/html Date Sat, 16 Jun 2012 10:08:40 GMT Server nginx/1.0.5 Transfer-Encoding chunked X-Powered-By PHP/5.3.6-13ubuntu3.7 I think I just answered my own question. Is the Content-Type text/html the problem? How can I change that? My personal guess is that I have to use this in some way include /etc/nginx/mime.types; default_type application/octet-stream; But I'm not sure... anyone got an idea how to solve this? TLDR; the CSS file is delivered correctly, but it doesn't seem to be 'used' as CSS by the browser. (Tested; it works on Apache.)
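    Yes: PHP sends "Content-Type: text/html" by default, and that header wins over anything nginx's mime.types would have chosen, so the browser refuses to apply the stylesheet. A minimal sketch of the usual fix, emitting the right header from inside the CSS file itself (the file path is assumed from the config above):

      cat > /var/www/board/public/test.css <<'EOF'
      <?php header('Content-Type: text/css'); ?>
      body { background-color: #000; }
      EOF
      # Verify: should now report text/css
      curl -sI http://localhost/test.css | grep -i content-type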


  • Volume group disappeared, LVs still available

    - by Ben
    I've run into an issue with my KVM host which runs VMs on a LVM volume. As of last night the logical volumes are no longer seen as such (I can't create snapshots of them even though I have been for months now). Running any scans all result in nothing being found: [root@apollo ~]# pvscan No matching physical volumes found [root@apollo ~]# vgscan Reading all physical volumes. This may take a while... No volume groups found root@apollo ~]# lvscan No volume groups found If I try restoring the VG conf backup from /etc/lvm/backups/vg0 I get the following error: [root@apollo ~]# vgcfgrestore -f /etc/lvm/backup/vg0 vg0 Couldn't find device with uuid 20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt. Cannot restore Volume Group vg0 with 1 PVs marked as missing. Restore failed. /etc/lvm/backups/vg0 has the following for the physical volume: physical_volumes { pv0 { id = "20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt" device = "/dev/sda5" # Hint only status = ["ALLOCATABLE"] flags = [] dev_size = 4292870143 # 1.99902 Terabytes pe_start = 384 pe_count = 524031 # 1.99902 Terabytes } } fdisk -l /dev/sda shows the following: [root@apollo ~]# fdisk -l /dev/sda Disk /dev/sda: 6000.1 GB, 6000069312512 bytes 64 heads, 32 sectors/track, 5722112 cylinders Units = cylinders of 2048 * 512 = 1048576 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000188b7 Device Boot Start End Blocks Id System /dev/sda1 2 32768 33553408 82 Linux swap / Solaris /dev/sda2 32769 33280 524288 83 Linux /dev/sda3 33281 1081856 1073741824 83 Linux /dev/sda4 1081857 3177984 2146435072 85 Linux extended /dev/sda5 1081857 3177984 2146435071+ 8e Linux LVM The server is running a 4 disk HW RAID10 which seems perfectly healthy according to megacli and smartd. The only odd message in /var/log/messages is the following which shows up every couple of hours: Jun 10 09:41:57 apollo udevd[527]: failed to create queue file: No space left on device Output of df -h [root@apollo ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 1016G 119G 847G 13% / /dev/sda2 508M 67M 416M 14% /boot Does anyone have any ideas what to do next? The VMs are all running fine at the moment apart from not being able to snapshot them. Updated with extra info It's not a lack of inodes: [root@apollo ~]# df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 67108864 48066 67060798 1% / /dev/sda2 32768 47 32721 1% /boot pvs, vgs & lvs either output nothing or "No volume groups found".
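    The standard recovery path when the data is intact but the PV label or metadata has gone missing is to recreate the PV with its old UUID and then restore the VG metadata from the automatic backup. A sketch using the UUID and backup file quoted above (take an image of /dev/sda5 first if at all possible):

      pvcreate --uuid "20zG25-H8MU-UQPf-u0hD-NftW-ngsC-mG63dt" \
               --restorefile /etc/lvm/backup/vg0 /dev/sda5
      vgcfgrestore -f /etc/lvm/backup/vg0 vg0
      vgchange -ay vg0
      lvscan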


  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect — correct me if I'm wrong — that if I reboot the server, it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me). Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting? This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it): # mdadm --detail /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Oct 11 21:07:54 2009 Raid Level : raid1 Array Size : 97536 (95.27 MiB 99.88 MB) Used Dev Size : 97536 (95.27 MiB 99.88 MB) Raid Devices : 2 Total Devices : 1 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Jun 30 09:31:16 2011 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4 Events : 0.112 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 0 0 1 removed Thanks, Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why: Boot time autodetection of RAID arrays When md is compiled into the kernel (not as module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays. This autodetection may be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time. The kernel parameter "raid=partitionable" (or "raid=part") means that all auto-detected arrays are assembled as partitionable. I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep noise down.
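    As a hedge against the letter shuffling either way, md can be pointed at a persistent device alias instead of /dev/sdc; the array identifies members by superblock, so the name used at --add time is cosmetic. A sketch (the by-id name below is a placeholder for whatever ls actually shows):

      ls -l /dev/disk/by-id/ | grep -w sdc1
      mdadm /dev/md0 --add /dev/disk/by-id/scsi-SATA_DISK_SERIAL-part1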


  • Nvidia Drivers on Debian / Lenny (Stable) -> Installation successful -> Monitors gets black

    - by David
    I have successfully installed the proprietary drivers for my nvidia (geforce 7300 gt) graphics card on debian/lenny. I know its not the best way to chose for driver installation ( see this link: http://wiki.debian.org/NvidiaGraphicsDrivers#non-freedrivers ). but the two ways seem to be possible for me (nvidia-kernel module compilation). Now the problem is that the monitors gets black, the power light starts blinking after i launch the x-server. Have a short look a the logs (output truncated from /var/log/Xorg.0.log): (II) Setting vga for screen 0. (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 (==) NVIDIA(0): RGB weight 888 (==) NVIDIA(0): Default visual is TrueColor (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) (**) Jul 28 17:10:11 NVIDIA(0): Enabling RENDER acceleration (II) Jul 28 17:10:11 NVIDIA(0): Support for GLX with the Damage and Composite X extensions is (II) Jul 28 17:10:11 NVIDIA(0): enabled. (II) Jul 28 17:10:11 NVIDIA(0): NVIDIA GPU GeForce 7300 GT (G73) at PCI:1:0:0 (GPU-0) (--) Jul 28 17:10:11 NVIDIA(0): Memory: 262144 kBytes (--) Jul 28 17:10:11 NVIDIA(0): VideoBIOS: 05.73.22.25.00 (II) Jul 28 17:10:11 NVIDIA(0): Detected PCI Express Link width: 16X (--) Jul 28 17:10:11 NVIDIA(0): Interlaced video modes are supported on this GPU (--) Jul 28 17:10:11 NVIDIA(0): Connected display device(s) on GeForce 7300 GT at PCI:1:0:0: (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0) (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0) (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (CRT-0): 400.0 MHz maximum pixel clock (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): 165.0 MHz maximum pixel clock (--) Jul 28 17:10:11 NVIDIA(0): Samsung SyncMaster (DFP-0): Internal Single Link TMDS (II) Jul 28 17:10:11 NVIDIA(0): Assigned Display Device: CRT-0 (==) Jul 28 17:10:11 NVIDIA(0): (==) Jul 28 17:10:11 NVIDIA(0): No modes were requested; the default mode "nvidia-auto-select" (==) Jul 28 17:10:11 NVIDIA(0): will be used as the requested mode. (==) Jul 28 17:10:11 NVIDIA(0): (II) Jul 28 17:10:11 NVIDIA(0): Validated modes: (II) Jul 28 17:10:11 NVIDIA(0): "nvidia-auto-select" (II) Jul 28 17:10:11 NVIDIA(0): Virtual screen size determined to be 1280 x 1024 (--) Jul 28 17:10:11 NVIDIA(0): DPI set to (85, 86); computed from "UseEdidDpi" X config (--) Jul 28 17:10:11 NVIDIA(0): option (==) Jul 28 17:10:11 NVIDIA(0): Enabling 32-bit ARGB GLX visuals. (--) Depth 24 pixmap format is 32 bpp Here is the complete /etc/X11/xorg.conf file as generated by nvidia-xconfig: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 256.35 (buildmeister@builder101) Wed Jun 16 19:25:59 PDT 2010 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" Hor
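    A hedged diagnostic for when the driver lights up the wrong output: the log above shows the display device assigned to CRT-0, while the panel may actually hang off DFP-0. The nvidia driver's ConnectedMonitor option overrides detection; this is a guess to test, not a confirmed fix for this card:

      # Add to the "Device" section of /etc/X11/xorg.conf:
      #     Option "ConnectedMonitor" "DFP-0"
      sudo nano /etc/X11/xorg.conf
      sudo invoke-rc.d gdm restart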


  • ssh client problem: Connection reset by peer

    - by yonix
    I'm having a really annoying problem on my Ubuntu laptop. I noticed it today, after upgrading to Ubuntu 11.04, although I'm not entirely sure this is the cause as I played with my ssh keys a few days ago. The problem is, whenever I try to ssh to ANY host I get the following error: Read from socket failed: Connection reset by peer running with -vvv gives the following output: OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to hostname [10.0.0.2] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/id_rsa type -1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 1.99, remote software version OpenSSH_4.2 debug1: match: OpenSSH_4.2 pat OpenSSH_4* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-1ubuntu3 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "hostname" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: loaded 0 keys debug1: SSH2_MSG_KEXINIT sent Read from socket failed: Connection reset by peer My /etc/ssh/ssh_config: Host * SendEnv LANG LC_* HashKnownHosts yes GSSAPIAuthentication no GSSAPIDelegateCredentials no I can connect to my laptop from any other server via ssh, and I can also ssh localhost from my laptop successfully. I can connect to all these other server from other laptops, and I don't see anything in the logs of the other servers regarding my failed attempt. I tried to stop iptables, didn't help. I tried several tricks I could find online with my /etc/ssh/ssh_config, but I was unsuccessful in solving the problem... Any ideas? Edit: This is the log from one of the hosts I try to connect to: May 1 19:15:23 localhost sshd[2845]: debug1: Forked child 2847. 
May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: entering fd = 8 config len 577 May 1 19:15:23 localhost sshd[2845]: debug3: ssh_msg_send: type 0 May 1 19:15:23 localhost sshd[2845]: debug3: send_rexec_state: done May 1 19:15:23 localhost sshd[2847]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8 May 1 19:15:23 localhost sshd[2847]: debug1: inetd sockets after dupping: 3, 3 May 1 19:15:23 localhost sshd[2847]: Connection from 10.0.0.7 port 55747 May 1 19:15:23 localhost sshd[2847]: debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1 Debian-1ubuntu3 May 1 19:15:23 localhost sshd[2847]: debug1: match: OpenSSH_5.8p1 Debian-1ubuntu3 pat OpenSSH* May 1 19:15:23 localhost sshd[2847]: debug1: Enabling compatibility mode for protocol 2.0 May 1 19:15:23 localhost sshd[2847]: debug1: Local version string SSH-2.0-OpenSSH_5.3 May 1 19:15:23 localhost sshd[2847]: debug2: fd 3 setting O_NONBLOCK May 1 19:15:23 localhost sshd[2847]: debug2: Network child is on pid 2848 May 1 19:15:23 localhost sshd[2847]: debug3: preauth child monitor started May 1 19:15:23 localhost sshd[2847]: debug3: mm_request_receive entering May 1 19:15:23 localhost sshd[2848]: debug3: privsep user:group 74:74 May 1 19:15:23 localhost sshd[2848]: debug1: permanently_set_uid: 74/74 May 1 19:15:23 localhost sshd[2848]: debug1: list_hostkey_types: ssh-rsa,ssh-dss May 1 19:15:23 localhost sshd[2848]: debug1: SSH2_MSG_KEXINIT sent May 1 19:15:23 localhost sshd[2848]: debug3: Wrote 784 bytes for a total of 805 May 1 19:15:23 localhost sshd[2848]: fatal: Read from socket failed: Connection reset by peer
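    Before blaming the network, a clean-slate test rules the local client configuration and keys in or out; these are stock OpenSSH options, nothing Ubuntu-specific:

      ssh -F /dev/null \
          -o UserKnownHostsFile=/dev/null \
          -o StrictHostKeyChecking=no \
          -o IdentitiesOnly=yes \
          user@hostname
      # Works here but not normally => bisect ~/.ssh/config, /etc/ssh/ssh_config
      # and the keys you recently touched. Still dies at KEXINIT => look at the
      # path (MTU, firewall) instead.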


  • arp "who-has tell" on cloned machine

    - by mcmorry
    I have an urgent problem to solve today, but I'm lost. Please help. I've cloned a Virtual Machine hosted on VMware ESXi 4.1. The OS is now Ubuntu Server 12.04 LTS, but at the time of cloning it was 10.04 LTS. I fixed the MAC address manually inside /etc/udev/rules.d/70-persistent-net.rules. It is a known problem on Ubuntu. I had to remove the old MAC address and set the new one as eth0. Everything seems to work fine, except ARP. My provider OVH sent me a warning to resolve it today (this is the second day) or they will block my IP! The log contains many lines like this: Tue Jun 5 01:04:29 2012 : arp who-has 178.32.136.212 tell 178.32.136.224 where .224 is the cloned server that is causing problems, and .212 is the machine it was cloned from. arp -na returns: ? (178.33.230.254) at 00:07:b4:00:00:02 [ether] on eth0 ? (178.32.136.212) at 00:50:56:09:8e:f1 [ether] on eth0 The first IP is the ESXi machine. The second one should not be there. I'm not an expert and I don't know what else to do to fix this problem. Any help will be much appreciated. Thanks. EDIT: ifconfig on .224: eth0 Link encap:Ethernet HWaddr 00:50:56:01:32:c6 inet addr:178.32.136.224 Bcast:178.32.136.255 Mask:255.255.255.0 inet6 addr: fe80::250:56ff:fe01:32c6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:399924 errors:0 dropped:465 overruns:0 frame:0 TX packets:241884 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:58006071 (58.0 MB) TX bytes:663603166 (663.6 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:516216 errors:0 dropped:0 overruns:0 frame:0 TX packets:516216 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:236284275 (236.2 MB) TX bytes:236284275 (236.2 MB) ifconfig on .212: eth0 Link encap:Ethernet HWaddr 00:50:56:09:8e:f1 inet addr:178.32.136.212 Bcast:178.32.136.255 Mask:255.255.255.0 inet6 addr: fe80::250:56ff:fe09:8ef1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:16014 errors:0 dropped:0 overruns:0 frame:0 TX packets:14511 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:15134444 (15.1 MB) TX bytes:2683025 (2.6 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:9944 errors:0 dropped:0 overruns:0 frame:0 TX packets:9944 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1139347 (1.1 MB) TX bytes:1139347 (1.1 MB)
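    The who-has queries mean something on .224 is still trying to reach the old address, so grepping the configuration for the stale IP and flushing the neighbour cache is a cheap first pass (the paths are the usual suspects, not a definitive list):

      grep -r "178.32.136.212" /etc/network/ /etc/hosts /etc/udev/rules.d/ 2>/dev/null
      sudo ip neigh flush dev eth0
      ip neigh show dev eth0   # confirm the stale entry is gone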


  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites: a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about <10GB. At the main office, we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network. Instead, we simply have access to a network share over a Novell LAN. Question: how can we share files between offices in a way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again.) Up to now branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12), the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Dropbox et al. as we already use Gmail for corporate mail.) The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far. Nor do I have any good ideas about how to achieve this network/local mirroring, as we can't, for example, install the rsync daemon on the server. I do not want to use Google Drive locally on each workstation as this will inconvenience users, and more importantly, move files off the backed-up, well-maintained (UPS, RAID etc.) network share at head office. Our budget is only in the £100's. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.
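    If a designated workstation per site is acceptable, the head-office half of the idea needs nothing installed on the file server: rsync run client-side against the mounted Novell share requires no daemon. A sketch with hypothetical paths, mirroring one way into a folder a cloud-sync client then watches:

      rsync -av --delete /mnt/headoffice-share/ /srv/local-mirror/
      # e.g. from cron, hourly:
      # 0 * * * * rsync -av --delete /mnt/headoffice-share/ /srv/local-mirror/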


  • Win32: IProgressDialog will not disappear until you mouse over it.

    - by Ian Boyd
    I'm using the Win32 progress dialog. The damnedest thing is that when I call: progressDialog.StopProgressDialog(); it doesn't disappear. It stays on screen until the user moves her mouse over it - then it suddenly disappears. The call to StopProgressDialog returns right away (i.e. it's not a synchronous call). I can prove this by doing things after the call has returned: private void button1_Click(object sender, EventArgs e) { //Force red background to prove we've started this.BackColor = Color.Red; this.Refresh(); //Start a progress dialog IProgressDialog pd = (IProgressDialog)new ProgressDialog(); pd.StartProgressDialog(this.Handle, null, PROGDLG.Normal, IntPtr.Zero); //The long running operation System.Threading.Thread.Sleep(10000); //Stop the progress dialog pd.SetLine(1, "Stopping Progress Dialog", false, IntPtr.Zero); pd.StopProgressDialog(); pd = null; //Return form to normal color to prove we've stopped. this.BackColor = SystemColors.Control; this.Refresh(); } The form: starts gray; turns red to show we've started; turns back to gray to show we've stopped. So the call to StopProgressDialog has returned, except the progress dialog is still sitting there, mocking me, showing the message: Stopping Progress Dialog. Doesn't appear for 10 seconds: Additionally, the progress dialog does not appear on screen until the System.Threading.Thread.Sleep(10000); ten-second sleep is over. Not limited to .NET WinForms: The same code also fails in Delphi, which is also an object wrapper around Windows' windows: procedure TForm1.Button1Click(Sender: TObject); var pd: IProgressDialog; begin Self.Color := clRed; Self.Repaint; pd := CoProgressDialog.Create; pd.StartProgressDialog(Self.Handle, nil, PROGDLG_NORMAL, nil); Sleep(10000); pd.SetLine(1, StringToOleStr('Stopping Progress Dialog'), False, nil); pd.StopProgressDialog; pd := nil; Self.Color := clBtnFace; Self.Repaint; end; PreserveSig: An exception would be thrown if StopProgressDialog were failing. Most of the methods in IProgressDialog, when translated into C# (or into Delphi), use the compiler's automatic mechanism of converting failed COM HRESULTs into a native language exception. In other words, the following two signatures will throw an exception if the COM call returned an error HRESULT (i.e. a value less than zero): //C# void StopProgressDialog(); //Delphi procedure StopProgressDialog; safecall; Whereas the following lets you see the HRESULTs and react yourself: //C# [PreserveSig] int StopProgressDialog(); //Delphi function StopProgressDialog: HRESULT; stdcall; HRESULT is a 32-bit value. If the high bit is set (or the value is negative) it is an error. I am using the former syntax, so if StopProgressDialog returned an error it would be automatically converted to a language exception. Note: just for SaG I used the [PreserveSig] syntax; the returned HRESULT is zero. MsgWait? The symptom is similar to what Raymond Chen described once, which has to do with the incorrect use of PeekMessage followed by MsgWaitForMultipleObjects: "Sometimes my program gets stuck and reports one fewer record than it should. I have to jiggle the mouse to get the value to update. After a while longer, it falls two behind, then three..." But that would mean that the failure is in IProgressDialog, since it fails equally well on CLR .NET WinForms and native Win32 code.


  • How do I effectively fake a div's background color using an image in the body element?

    - by janoChen
    I want to get something like the following: The dark grey is the sidebar but I want to apply that color into the body element as an image which repeats itself vertically but at the same time doesn't cover the footer (light gray). (this is the easiest way I found to stretch the color (dark gray) until the bottom.) Part of my CSS: body { color: #888; font-family: Arial, "MS Trebuchet", sans-serif; font-size: 75% } .container { margin: 0 auto; overflow: hidden; padding: 0 15px; width: 960px; } /* header */ #header { background: #444; } /* banner */ #header-top { overflow: hidden; height: 77px; width: 960px; /* ie6 hack */ } #lang { float: right; padding: 50px 0 0 0; } /* work */ #content { background: #EEE; } #content a { border-bottom: 0; } #mainbar { overflow: hidden; margin: 0 10px 0 0; width: 644; float: left; } #sidebar { background: #DDD; color: #777; overflow: hidden; margin: 20px 0 10px 0; padding: 15px; width: 240px; float: right; } #sidebar h3 { color: #888; } #about { margin: 0 0 20px; } /* footer */ #footer { color: #777; background: #DDD; clear: both; } /* contact */ #footer-top { line-height: 160%; overflow: hidden; padding: 30px 0; width: 960px; /* ie6 hack */ } #footer-bottom { font-size: 10px; margin: 15px auto; } Part of my HTML: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7"/> <title>Alex Chen - Web Development, Graphic Design, and Translation</title> <link rel="stylesheet" type="text/css" href="styles/global.css" /> </head> <body id="home"> <div id="header"> <div class="container"> <div id="header-top"> </div> </div><!-- .container --> </div><!-- #header --> <div id="content"> <div class="container"> <div id="mainbar"> </div> <!-- #mainbar--> <div id=sidebar> </div> <!-- #sidebar --> </div><!-- .container --> </div><!-- #content --> <div id="footer"> <div class="container"> <div id="footer-top"> </div><!-- #footer-top --> <div id="footer-bottom"> </div> </div><!-- .container --> </div><!-- #footer --> </body> </html>


  • Oracle HRMS API – Create or Update Employee Salary

    - by PRajkumar
    API --  hr_maintain_proposal_api.cre_or_upd_salary_proposal   Note - Salary Basis is required to be assigned to employee assignment before to run Salary Proposal API Example --   DECLARE     lb_inv_next_sal_date_warning      BOOLEAN;     lb_proposed_salary_warning         BOOLEAN;     lb_approved_warning                       BOOLEAN;     lb_payroll_warning                            BOOLEAN;     ln_pay_proposal_id                           NUMBER;     ln_object_version_number                NUMBER; BEGIN    -- Create or Upadte Employee Salary Proposal    -- ----------------------------------------------------------------     hr_maintain_proposal_api.cre_or_upd_salary_proposal     (    -- Input data elements          -- ------------------------------          p_business_group_id                   => fnd_profile.value('PER_BUSINESS_GROUP_ID'),          p_assignment_id                            => 33561,          p_change_date                                => TO_DATE('13-JUN-2011'),          p_proposed_salary_n                   => 1000,          p_approved                                      => 'Y',          -- Output data elements          -- --------------------------------          p_pay_proposal_id                       => ln_pay_proposal_id,          p_object_version_number           => ln_object_version_number,            p_inv_next_sal_date_warning  => lb_inv_next_sal_date_warning,          p_proposed_salary_warning     => lb_proposed_salary_warning,          p_approved_warning                   => lb_approved_warning,          p_payroll_warning                        => lb_payroll_warning     );    COMMIT; EXCEPTION        WHEN OTHERS THEN                           ROLLBACK;                           dbms_output.put_line(SQLERRM); END; / SHOW ERR;  


  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them a little bit: remote desktop access to an azure virtual machine. Now I would like to talk about another cool feature - Windows Azure Connect.

    What's Windows Azure Connect
    I would like to quote the definition of Windows Azure Connect in MSDN: With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services. There's an image available at MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect the Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some are not. With Windows Azure Connect, the roles deployed on the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to the local database and access local shared resources such as shared files, folders and printers, etc.

    Difference between Windows Azure Connect and AppFabric
    It may seem that Windows Azure Connect duplicates the Windows Azure AppFabric. Both of them aim to solve the problem of how to communicate between resources in the cloud and inside the local network. The table below lists the differences as I understand them.

    Category     | Windows Azure Connect                                            | Windows Azure AppFabric
    Purpose      | An IP-sec connection between the local machines and azure roles. | An application service running on the cloud.
    Connectivity | IP-sec, Domain-joint                                             | Net Tcp, Http, Https
    Components   | Windows Azure Connect Driver                                     | Service Bus, Access Control, Caching
    Usage        | Azure roles connect to a local database server; use local shared files, folders and printers, etc.; join the local AD. | Expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache.

    And also some scenarios on which of them should be used.

    Scenario                                                                                                                   | Connect | AppFabric
    I have a service deployed in the Intranet and I want people to be able to use it from the Internet.                        |         | Y
    I have a website deployed on Azure that needs a database deployed inside the company, and I don't want to expose the database to the Internet. | Y |
    I have a service deployed in the Intranet that uses AD authorization, and a website deployed on Azure that needs to use this service.          | Y |
    I have a service deployed in the Intranet; some people on the Internet can use it but need to be authorized and authenticated.                 |   | Y
    I have a service in the Intranet and a website deployed on Azure. The service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality. | Y | Y

    How to Enable Windows Azure Connect
    OK, we've covered a lot of information about Windows Azure Connect and its differences from the Windows Azure AppFabric. Now let's see how to enable and use Windows Azure Connect. First of all, since this feature is in the CTP stage we must apply before using it. On the Windows Azure Portal we can see our CTP features status under Home, Beta Program page.
    You can apply to join the Beta Program from this page. After a few days, Microsoft will send an email to you (at the address of your Live ID) when it's available. In my case we can see that Windows Azure Connect has been activated by Microsoft, and then we can click the Connect button on top, or click the Virtual Network item in the left navigation bar. The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.

    Add a Local Machine to Azure Connect
    As we explained above, Windows Azure Connect can make an IP-sec connection between local machines and azure role instances, so first we add a local machine to our Connect. To do this we click the Install Local Endpoint button on top, and the portal will give us a URL. Open this URL on the machine we want to add and it will download the endpoint software. This software is installed on each local machine that we want to join to the Connect. After installation, a tray icon appears to indicate that this machine has joined our Connect. The local application refreshes its status with the Windows Azure platform every 5 minutes, but we can click the Refresh button to make it retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see it in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.

    Add a Windows Azure Role to Azure Connect
    Let's create a very simple azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect we open the azure project properties of this role from the solution explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token. Click this button and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to azure. After completing the deployment we can see the role instance listed in the Windows Azure Portal - Virtual Connect section.

    Establish the Connect Group
    The final task is to create a connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create the group for them. The machines and instances can be used in one or more groups. In the Virtual Connect section, click the Groups and Roles node in the left navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is specify a group name and description, and then select the local computers and azure role instances into this group. After the Azure Fabric updates the group settings we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to connected. Windows Azure Connect updates the group information every 5 minutes. If you find the status is still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connect.
Test the Azure Connect between the Local Machine and the Azure Role Instance

Now our local machine and Azure role instance are connected, which means each of them can communicate with the others at the IP level. For example, we can open the SQL Server port so that our Azure role can connect to it using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect local machines and role instances; you can get the IP address from the Virtual Network section of the Windows Azure Portal when selecting an endpoint. I won’t walk through a full example of how to use the Connect, but I would like to show two very simple tests. The first one is PING.

When a local machine and a role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the Firewall settings. To do this we need to run a command line before testing. Open a command window on the local machine and on the role instance, and execute the following command:

netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting; the full discussion we had is available here. You can use the Remote Desktop Access feature to log on to the Azure role instance – please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by its name. Below is the screen where I PING my local machine from my Azure instance. We can use the IPv6 address to PING as well, as in the following image where I PING my role instance from my local machine through the IPv6 address.

Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and after logging on to the role instance we can see the folder content in the File Explorer window.

Summary

In this blog post I introduced another new feature – Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our Azure application can use local resources such as database servers, printers, etc. without exposing them to the Internet.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to propagate http response code from back-end to client

    - by Manoj Neelapu
    Oracle Service Bus can be used for pass-through cases. Some use cases require propagating the HTTP response code back to the caller; http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example, which we will try to accomplish in this tutorial. We will demonstrate this feature using Oracle Service Bus (11.1.1.3.0), and we will also use commons-logging-1.1.1, httpcomponents-client-4.0.1 and httpcomponents-core-4.0.1 to write a client for the demonstration.

First we create a simple JSP which always sets the response code to 304. The JSP snippet looks like:

<%@ page language="java"
         contentType="text/xml; charset=UTF-8"
         pageEncoding="UTF-8" %>
<%
    System.out.println("Servlet setting Responsecode=304");
    response.setStatus(304);
    response.flushBuffer();
%>

We will now deploy this JSP on a WebLogic server with URI=http://localhost:7021/reponsecode/. For this JSP we create a simple Any XML business service (BS). We also create a proxy service as shown below. Once the proxy is created, we configure its pipeline with a route node that invokes the BS (JSPCaller) created in the first place. Then we create an error handler for the route node and add a stage to it.

When the HTTP BS sends a request, the JSP sends the response back. If the response code is not 200, the HTTP BS treats that as an error and the error handler configured above is invoked. In the error handler we print $outbound to show the response code sent by the JSP, followed by the actions that propagate it back to the caller. To test this I created a simple client:

import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
import org.apache.http.HttpVersion;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.ClientConnectionManager;
import org.apache.http.conn.scheme.PlainSocketFactory;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeRegistry;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
import org.apache.http.params.BasicHttpParams;
import org.apache.http.params.HttpParams;
import org.apache.http.params.HttpProtocolParams;
import org.apache.http.util.EntityUtils;

/**
 * @author MNEELAPU
 */
public class TestProxy304 {

    public static void main(String arg[]) throws Exception {
        HttpHost target = new HttpHost("localhost", 7021, "http");

        // General setup. Register the "http" protocol scheme; it is required
        // by the default operator to look up socket factories.
        SchemeRegistry supportedSchemes = new SchemeRegistry();
        supportedSchemes.register(new Scheme("http",
                PlainSocketFactory.getSocketFactory(), 7021));

        // Prepare parameters
        HttpParams params = new BasicHttpParams();
        HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
        HttpProtocolParams.setContentCharset(params, "UTF-8");
        HttpProtocolParams.setUseExpectContinue(params, true);

        ClientConnectionManager connMgr =
                new ThreadSafeClientConnManager(params, supportedSchemes);
        DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);

        HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");
        System.out.println("executing request to " + target);
        HttpResponse rsp = httpclient.execute(target, req);
        HttpEntity entity = rsp.getEntity();

        System.out.println("----------------------------------------");
        System.out.println(rsp.getStatusLine());
        Header[] headers = rsp.getAllHeaders();
        for (int i = 0; i < headers.length; i++) {
            System.out.println(headers[i]);
        }
        System.out.println("----------------------------------------");
        if (entity != null) {
            System.out.println(EntityUtils.toString(entity));
        }

        // When the HttpClient instance is no longer needed, shut down the
        // connection manager to ensure immediate deallocation of all
        // system resources.
        httpclient.getConnectionManager().shutdown();
    }
}

On compiling and executing this, we see the output below in STDOUT, which clearly indicates that the response code was propagated from the business service to the proxy service:

executing request to http://localhost:7021
----------------------------------------
HTTP/1.1 304 Not Modified
Date: Tue, 08 Jun 2010 16:13:42 GMT
Content-Type: text/xml; charset=UTF-8
X-Powered-By: Servlet/2.5 JSP/2.1
----------------------------------------
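
As a side note, the same check should work from .NET as well. Below is a minimal C# sketch of mine against the same host, port and URI as the Java client above; with HttpWebRequest, a 304 surfaces as a WebException, so the status code is read from the exception’s response:

using System;
using System.Net;

// Hedged sketch: call the OSB proxy and print the propagated status code.
// The URL mirrors the Java client above; adjust host/port for your setup.
class TestProxy304DotNet
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://localhost:7021/HttpResponseCode/ProxyExposed");
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine((int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // Non-success codes such as 304 are raised as WebException.
            var response = ex.Response as HttpWebResponse;
            if (response != null)
            {
                Console.WriteLine((int)response.StatusCode); // expect 304
            }
        }
    }
}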

    Read the article

  • CodePlex Daily Summary for Sunday, November 27, 2011

    CodePlex Daily Summary for Sunday, November 27, 2011

Popular Releases

Terminals: Version 2 - Beta 4 Release: Beta 4 refresh build. Don't forget to backup your config files BEFORE upgrading! As usual, please take time to use and abuse this release. We left logging in place, and this is a debug build, so be sure to submit your logs on each bug reported, and please do report all bugs! Updated the About form to include the date and time of the build (useful for CI builds to ensure we have the correct version). "Favourites" and "History" save their expanded states after app restarts. Code cleanup, secu...

MiniTwitter: 1.76: release notes are in Japanese (garbled in the source); they mention User Streams and REST changes.

Media Companion: MC 3.424b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). Movie Show Resolutions: resolved an issue when reverting a multiselection of movies to "-none-"; added movie rename support for subtitle files '.srt' & '.sub'; finalised code for the '-1' fix (radiobutton to choose either filename or title); fixed an issue with Movie Batch Wizard Fanart - ...

Advanced Windows Phone Enginering Tool: WPE Downloads: This version of WPE gives you basic updating, restoring, and erasing for your Windows Phone device.

Anno 2070 Assistant: Beta v1.0 (STABLE): Anno 2070 Assistant Beta v1.0 released! Features included: complete building layouts for Ecos, Tycoons & Techs; complete production chains for Ecos, Tycoons & Techs; completed credits screen. Known issues: not all production chains and building layouts may be on the lists because they have not yet been discovered; however, data is still 99.9% complete. Currently the Supply & Demand screens, including the Calculator screen, are disabled until version 1.1.

Minemapper: Minemapper v0.1.7: Includes an updated Minecraft Biome Extractor and mcmap to support the new Minecraft 1.0.0 release (new block types, etc).

Visual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: strdup and _wcsdup function support added; preliminary support for VS 11 added. Bugs fixed: low performance after upgrading from VLD v2.1; memory leaks with static linking fixed (disabled calloc support); runtime error R6002 caused by a wrong memory dump format fixed; version.h fixed in the installer; some PVS-Studio warnings fixed.

NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.10: 3.6.0.10, 22-Nov-2011. Update: removed PreEmptive Platform integration (PreEmptive analytics); removed all PreEmptive attributes; removed PreEmptive.dll assembly references from all projects; added first support for ADAM/AD LDS. Thanks to PatBea. Work Item 9775: http://netsqlazman.codeplex.com/workitem/9775

VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains a Silverlight 5 library, Vlc.DotNet.Silverlight; a sample can be tested here. The new version adds and corrects many features: correction: reinitialize some variables; deprecated: the logging API, since VLC 1.2 (08/20/2011); added subitems in LocationMedia (for YouTube videos, ...); updated the WPF sample to use YouTube videos; many other corrections.

EZ-NFC: Alpha 1: THIS IS AN ALPHA RELEASE. STILL UNSTABLE AND SUBJECT TO ARCHITECTURE CHANGE. What is implemented (in alpha): ACR122L device, Mifare 1K tag, Windows frontend.

#liveDB: liveDB 0.3.2: New features: a new abstract storage scheme enabling future cloud support; a new file system structure and naming scheme for snapshots and journal files based on sequence numbers; journal files are never deleted; automatic snapshots during load or shutdown; renamed/added hooks to Model (JournalRestored, SnapshotRestored); an extensible logging facade; the journal gets split into 1MB segments (configurable); integrity checks before/during load/create; commands are cloned by default before ...

ReactiveMVVM: ReactiveMVVM v1.0: Example 1, property change:

public class Example1 : ViewModelBase
{
    string _Userid;

    /// <summary>
    /// person information of owner.
    /// </summary>
    public string Userid
    {
        get { return _Userid; }
        // true: broadcast the property change message.
        set { this.RaiseAndSetIfChanged(x => x.Userid, ref _Userid, value, true); }
    }

    // if the property changed, then do......
    // this.ObservableProperty(x => x.Useid) ...
}

IoCWrap: Initial: Initial release of the source code.

Code for Rapid C# Windows Development eBook + LINQPad and Data Tools: LinqPad Custom Visualizer Version 1.0: First release of my LinqPad Custom Visualizer. It is compiled against the Any-CPU build of LINQPad v4.36.6, so it can only be used with the LINQPad Beta: v4.36.x. To install, unzip it to the LinqPad plugins folder.

Distributed replay GUI: Distributed Replay Snapin: This is the dll for registering the snapin in mmc.

FaST-LMM: FActored Spectrally Transformed Linear Mixed Models: FaSTLMM v1.03 Binaries for Windows and Linux: These files contain everything necessary to run FaSTLMM on Windows or Linux, along with the license and users manual. To download the FaSTLMM source code, please follow the changeset link located above to the Source Code tab. The FaSTLMM.Win.zip download contains both C++ and CSharp executable versions of FaSTLMM. No installer is required; just unzip the file into a directory and run from there, or put the installation directory on your path and run it from anywhere. The C++ version included r...

SharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323). The FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue #447). The FBA Pack will now work in a zone that does not have FBA enabled (another zone must have FBA enabled, and the zone must contain the me...

SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1.

SQL Monitor - managing sql server performance: SQLMon 4.1 alpha 6: 1. Improved support for schemas. 2. Added find-reference on right-click in the object list. 3. Added object rename support.

BugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported, and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework, which is a requirement. Because of this the web.config has changed significantly. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...

New Projects

andrewtatham.robocode: Andrew Tatham's Robocode bots.
Clear SharePoint Lists: This project contains the tools used to clear the items from one or more lists.
Clipboard Editor: How many times have you pasted something in Notepad and then copied the plain text again? We do it all the time to strip formatting from the clipboard. This utility lets you pick which format from the clipboard to keep.
CS New Rus: Russian-language description (garbled in the source); the project is called CS New.
ElfDoc: ElfDoc enables you to create Word documents from templates, using Open XML.
HTC RUU .NET: HTC's legendary RUU goes .NET and open source. You can browse for an .nbh file without being locked to the current directory, and you can update your device's ROM the .NET way.
MobileGamePrototype: For now, just a skeleton of the architecture.
NopCommerce 23 Multi Store Support: Multi-store support for NopCommerce 23.
novel: fetch novel.
Orchard Custom Shapes: Ready-to-use custom Orchard shapes, like a table shape.
Philosophy Gadget: This gadget helps people associate known works of philosophy with their known authors.
ReefTracker: A controller-agnostic logging and reporting application for reef aquarium controllers.
SQLQuery: SQL Query.
Windows Phone Marketplace Viewer: A single aspx page for ASP.NET 3+ with no additional dependencies. It shows the top 2000 apps in one of three categories (paid and free together, only paid, or only free) for all the marketplace languages.

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :) I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work, and I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI.

Today I developed a mechanism for starting from scratch with your database. By scratch, I mean blowing away the existing database and creating it again from a single command line call. I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command, and that they should be operating off of their own isolated database to improve productivity. These scripts will help with that.

Here’s how I did it. First, we have to disconnect users. I did so with the help of a script from SQL Server Central. Note that I’m using sqlcmd variable replacement.

-- kills all the users in a particular database
-- dlhatheway/3M, 11-Jun-2000
declare @arg_dbname sysname
declare @a_spid smallint
declare @msg varchar(255)
declare @a_dbid int

set @arg_dbname = '$(DatabaseName)'

select @a_dbid = sdb.dbid
from master..sysdatabases sdb
where sdb.name = @arg_dbname

declare db_users insensitive cursor for
select sp.spid
from master..sysprocesses sp
where sp.dbid = @a_dbid

open db_users

fetch next from db_users into @a_spid
while @@fetch_status = 0
begin
    select @msg = 'kill '+convert(char(5),@a_spid)
    print @msg
    execute (@msg)
    fetch next from db_users into @a_spid
end

close db_users
deallocate db_users
GO

Once all users are booted from the database, we can commence with recreating it. I generated the creation script from SQL Server Management Studio, so I’m only going to show the important bits that weren’t generated. There are a bunch of Alter Database statements that aren’t shown. First, I had to find the default location of the database files in the install, since they can be in many different locations. I used Method 1 from a TechNet blog and then modified it a bit to do what I needed. I ended up using dynamic SQL because, for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string. I’m dropping the database first, if it exists. Here’s the code:

IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)')
BEGIN
    drop database $(DatabaseName)
END;
go

IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
BEGIN
    DROP DATABASE zzTempDBForDefaultPath
END;

-- Create temp database. Because no options are given, the default data and
-- log path locations are used
CREATE DATABASE zzTempDBForDefaultPath;

DECLARE @Default_Data_Path VARCHAR(512),
        @Default_Log_Path VARCHAR(512);

--Get the default data path
SELECT @Default_Data_Path =
(
    SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
    FROM sys.master_files mf
    INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
    WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0
);

--Get the default Log path
SELECT @Default_Log_Path =
(
    SELECT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)) + 1)
    FROM sys.master_files mf
    INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
    WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1
);

--Clean up.
IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
BEGIN
    DROP DATABASE zzTempDBForDefaultPath
END;

DECLARE @SQL nvarchar(max)
SET @SQL =
'CREATE DATABASE $(DatabaseName) ON PRIMARY
( NAME = N''$(DatabaseName)'',
  FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''',
  SIZE = 2048KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N''$(DatabaseName)Log'',
  FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''',
  SIZE = 1024KB , FILEGROWTH = 10%)'
exec (@SQL)
GO

And with that, your database is created. You can run these scripts on any server and with any database name. To do that, I created an MSBuild script that looks like this:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <PropertyGroup>
    <DatabaseName>MyDatabase</DatabaseName>
    <Server>localhost</Server>
    <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd>
    <ScriptDirectory>.\Scripts</ScriptDirectory>
  </PropertyGroup>
  <Target Name="Rebuild">
    <ItemGroup>
      <ScriptFiles Include="$(ScriptDirectory)\*.sql"/>
    </ItemGroup>
    <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/>
  </Target>
</Project>

Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory. Note also that the target uses batching to run each script in the Scripts subdirectory, one after the other. Each script is passed to the sqlcmd command line using the .Identity property of the item group. This target file is saved as “Database.target”.

To make this work, you’ll need msbuild in your path; then run the following command:

msbuild database.target /target:Rebuild

Once you’ve got your virgin database set up, you’d then use a tool like dbdeploy.net to determine that it is a virgin database, build a change script from the change scripts, and make another sqlcmd call to update the database with the appropriate scripts. I’m doing that next, so I’ll post a blog update when I’ve got it working.

Technorati Tags: MSBuild,Agile,CI,Database
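
Until that follow-up post lands, here is a rough C# sketch of the "is this a virgin database" check; the changelog table name is an assumption on my part, since dbdeploy.net defines its own conventions for tracking applied change scripts:

using System;
using System.Data.SqlClient;

// Hedged sketch: treat the database as virgin when no changelog table
// exists yet. The table name "changelog" is hypothetical; match it to
// whatever your change-tracking tool actually creates.
class VirginDatabaseCheck
{
    static void Main()
    {
        var connectionString =
            "Server=localhost;Database=MyDatabase;Integrated Security=true;";
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            var sql = "SELECT COUNT(*) FROM sys.tables WHERE name = 'changelog'";
            using (var command = new SqlCommand(sql, connection))
            {
                bool isVirgin = (int)command.ExecuteScalar() == 0;
                Console.WriteLine(isVirgin ? "Virgin database" : "Change history present");
            }
        }
    }
}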

    Read the article

  • A new SQL, a new Analysis Services, a new Workshop! #ssas #sql2012

    - by Marco Russo (SQLBI)
    One week ago Microsoft SQL Server 2012 finally debuted with a virtual launch event, and you can find many intro sessions there (20 minutes each). There is a lot of new content available if you want to learn more about SQL 2012, and in this blog post I’d like to provide a few links to sessions, documents, bits and courses that are available now or very soon.

First of all, the release of Analysis Services 2012 has finally released PowerPivot 2012 (many of us called it PowerPivot v2 before this official name) and also the new Data Mining Add-in for Microsoft Office 2010, now available also for Excel 64-bit! And, of course, don’t miss the Microsoft SQL Server 2012 Feature Pack; there are a lot of upgrades for both DBAs and developers. I just discovered there is a new LocalDB version of SQL Express that can run in user mode without any setup. Is this the end of SQL CE?

But now, back to Analysis Services: if you want some tutorials on Tabular, the Microsoft Virtual Academy has a whole track dedicated to Analysis Services 2012, but you will probably be interested also in the one about Reporting Services 2012.

If you think that virtual is good but it’s not enough, there are plenty of conferences in the coming months – these are just the ones where Alberto and I will deliver some SSAS Tabular presentations:

SQLBits X, London, March 29-31, 2012: if you are in London or want a good reason to go, this is the most important SQL Server event in Europe this year, no doubt about it. And not only because of the high number of attendees, but also because there is an impressive number of speakers (excluding me, of course) coming from all over the world. This is an event second only to the PASS Summit in Seattle, so there is no good reason not to attend it.

Microsoft SQL Server & Business Intelligence Conference 2012, Milan, March 28-29, 2012: this is an Italian conference, so the language might be a barrier, but many of us also speak English and the food is good! Just a few seats still available.

TechEd North America, Orlando, June 11-14, 2012: you know, this is a big event and it contains everything – if you want to spend a whole day learning the SSAS Tabular model with me and Alberto, don’t miss our pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (be careful, it is on June 10, a nice study-Sunday!).

TechEd Europe, Amsterdam, June 26-29, 2012: the European version of TechEd provides almost the same content, and you don’t have to go overseas. We also run the same pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (in this case it is on June 25, a regular Monday).

Alberto and I will also speak at some user group meetings around Europe during… well, we’re going to travel a lot in the next months. In fact, if you want complete training on SSAS Tabular, you should spend two days with us in one of our SSAS Tabular Workshops! We prepared a 2-day seminar, a very intense one, that starts from simple tabular modeling and covers architecture, DAX, queries, advanced modeling, security, deployment, optimization, monitoring, and relationships with PowerPivot and Multidimensional… Really, there is a lot of stuff here!
We announced the first dates in Europe and also an online edition optimized for America’s time zone:

Apr 16-17, 2012 – Amsterdam, Netherlands
Apr 26-27, 2012 – Copenhagen, Denmark
May 7-8, 2012 – Online for America’s time zone
May 14-15, 2012 – Brussels, Belgium
May 21-22, 2012 – Oslo, Norway
May 24-25, 2012 – Stockholm, Sweden
May 28-29, 2012 – London, United Kingdom
May 31-Jun 1, 2012 – Milan, Italy (Italian language)

Also Chris Webb will join us in these workshops, and for every date you can find on the web site who the speaker is. The course is based on our upcoming book, almost 600 pages (!) about SSAS Tabular, an incredible effort that will very soon be available in preview (Rough Cuts from O’Reilly) and will be on the shelf in May. I will provide a link to order it as soon as we have one!

And if you think that this is not enough… you’re right! Do you know the only thing you can do to optimize your Tabular model? Optimize your DAX code. Learning DAX is easy, mastering DAX requires some knowledge… and our DAX Advanced Workshop will provide exactly the required content. Public classes will be available later this year; for now we just deliver it on demand.

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
    In my previous posting I introduced the latest Oracle Real Application Cluster / Oracle VM template. I mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going. Here is an example...

I built a 4 node RAC cluster, completely configured, in just over 40 minutes – starting from importing the template into Oracle VM and creating the VMs, to a fully up and running Oracle RAC. And what was needed? One text file with some hostnames and IP addresses, and deploycluster.py.

The setup is a 4 node cluster where each VM has 8GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100GB, 5 x 20GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed. In fact, I connected to the database remotely before logging in at all.

# ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini

Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation
(com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64)
Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012)
Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini

INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password:
INFO: Attempting to connect to Oracle VM Manager...
INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e)
INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined...
INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini
INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3']
INFO: Verifying all (4) VMs are in Running state
INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK.
INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK.
INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5)
INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini
buildcluster: yes
INFO: Starting to send cluster details to all (4) VM(s).......
INFO: Sending to VM with a simple name of "racovm.0"....
INFO: Sending to VM with a simple name of "racovm.1".....
INFO: Sending to VM with a simple name of "racovm.2".....
INFO: Sending to VM with a simple name of "racovm.3"......
INFO: Cluster details sent to (4) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)...
INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s)
Logfile at: /root/rac/deploycluster/deploycluster2.log

My netconfig.ini:

# Node specific information
NODE1=db11rac1
NODE1VIP=db11rac1-vip
NODE1PRIV=db11rac1-priv
NODE1IP=192.168.1.56
NODE1VIPIP=192.168.1.65
NODE1PRIVIP=192.168.2.2
NODE2=db11rac2
NODE2VIP=db11rac2-vip
NODE2PRIV=db11rac2-priv
NODE2IP=192.168.1.58
NODE2VIPIP=192.168.1.66
NODE2PRIVIP=192.168.2.3
NODE3=db11rac3
NODE3VIP=db11rac3-vip
NODE3PRIV=db11rac3-priv
NODE3IP=192.168.1.173
NODE3VIPIP=192.168.1.174
NODE3PRIVIP=192.168.2.4
NODE4=db11rac4
NODE4VIP=db11rac4-vip
NODE4PRIV=db11rac4-priv
NODE4IP=192.168.1.175
NODE4VIPIP=192.168.1.176
NODE4PRIVIP=192.168.2.5
# Common data
PUBADAP=eth0
PUBMASK=255.255.255.0
PUBGW=192.168.1.1
PRIVADAP=eth1
PRIVMASK=255.255.255.0
RACCLUSTERNAME=raccluster
DOMAINNAME=wimmekes.net
DNSIP=
# Device used to transfer network information to second node
# in interview mode
NETCONFIG_DEV=/dev/xvdc
# 11gR2 specific data
SCANNAME=db11vip
SCANIP=192.168.1.57

The last few lines of the in-VM log file:

2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s)
2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster
2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s)

From start_vm to completely configured: 29m:39s. The other roughly 10 minutes were spent importing the template and creating the 4 VMs from it, along with the shared storage configuration. The result is a complete Oracle 11gR2 RAC database with ASM, CRS and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.
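
To make the "simply connect and use" part concrete, here is a minimal C# sketch using ODP.NET against the SCAN name from the netconfig.ini above; the service name "orcl" and the credentials are assumptions of mine, not values taken from the template:

using System;
using Oracle.DataAccess.Client; // ODP.NET

// Hedged sketch: connect through the RAC SCAN listener (db11vip) using
// EZConnect. Service name and credentials are hypothetical.
class ScanConnectTest
{
    static void Main()
    {
        var connectionString =
            "Data Source=db11vip:1521/orcl;User Id=system;Password=***;";
        using (var connection = new OracleConnection(connectionString))
        {
            connection.Open();
            using (var command = new OracleCommand(
                "select instance_name from v$instance", connection))
            {
                // Repeated runs should land on different db11rac instances.
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}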

    Read the article
