Search Results

Search found 2980 results on 120 pages for 'accounts payable'.

  • Linux fsck.ext3 says "Device or resource busy" although I did not mount the disk.

    - by matnagel
    I am running an ubuntu 8.04 server instance with a 8GB virtual disk on vmware 1.0.9. For disk maintenance I made a copy of the virtual disk (by making a copy of the 2 vmdk files of sda on the stopped vm on the host) and added it to the original vm. Now this vm has it's original virtual disk sda plus a 1:1 copy (sdd). There are 2 additional disk sdb and sdc which I ignore.) I would expect sdb not to be mounted when I start the vm. So I try tp do a ext2 fsck on sdd from the running vm, but it reports fsck reported that sdb was mounted. $ sudo fsck.ext3 -b 8193 /dev/sdd e2fsck 1.40.8 (13-Mar-2008) fsck.ext3: Device or resource busy while trying to open /dev/sdd Filesystem mounted or opened exclusively by another program? The "mount" command does not tell me sdd is mounted: $ sudo mount /dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) /sys on /sys type sysfs (rw,noexec,nosuid,nodev) varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755) varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777) udev on /dev type tmpfs (rw,mode=0755) devshm on /dev/shm type tmpfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) /dev/sdc1 on /mnt/r1 type ext3 (rw,relatime,errors=remount-ro) /dev/sdb1 on /mnt/k1 type ext3 (rw,relatime,errors=remount-ro) securityfs on /sys/kernel/security type securityfs (rw) When I ignore the warning and continue the fsck, it reported many errors. How do I get this under control? Is there a better way to figure out if sdd is mounted? Or how is it "busy? How to unmount it then? How to prevent ubuntu from automatically mounting. Or is there something else I am missing? Also from /var/log/syslog I cannot see it is mounted, this is the last part of the startup sequence: kernel: [ 14.229494] ACPI: Power Button (FF) [PWRF] kernel: [ 14.230326] ACPI: AC Adapter [ACAD] (on-line) kernel: [ 14.460136] input: PC Speaker as /devices/platform/pcspkr/input/input3 kernel: [ 14.639366] udev: renamed network interface eth0 to eth1 kernel: [ 14.670187] eth1: link up kernel: [ 16.329607] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/ kernel: [ 16.367540] parport_pc 00:08: reported by Plug and Play ACPI kernel: [ 16.367670] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE] kernel: [ 19.425637] NET: Registered protocol family 10 kernel: [ 19.437550] lo: Disabled Privacy Extensions kernel: [ 24.328857] loop: module loaded kernel: [ 24.449293] lp0: using parport0 (interrupt-driven). kernel: [ 26.075499] EXT3 FS on sda1, internal journal kernel: [ 28.380299] kjournald starting. Commit interval 5 seconds kernel: [ 28.381706] EXT3 FS on sdc1, internal journal kernel: [ 28.381747] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 28.444867] kjournald starting. Commit interval 5 seconds kernel: [ 28.445436] EXT3 FS on sdb1, internal journal kernel: [ 28.445444] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 31.309766] eth1: no IPv6 routers present kernel: [ 35.054268] ip_tables: (C) 2000-2006 Netfilter Core Team mysqld_safe[4367]: started mysqld[4370]: 100124 14:40:21 InnoDB: Started; log sequence number 0 10130914 mysqld[4370]: 100124 14:40:21 [Note] /usr/sbin/mysqld: ready for connections. mysqld[4370]: Version: '5.0.51a-3ubuntu5.4' socket: '/var/run/mysqld/mysqld.sock' port: 3 /etc/mysql/debian-start[4417]: Upgrading MySQL tables if necessary. 
/etc/mysql/debian-start[4422]: Looking for 'mysql' in: /usr/bin/mysql /etc/mysql/debian-start[4422]: Looking for 'mysqlcheck' in: /usr/bin/mysqlcheck /etc/mysql/debian-start[4422]: This installation of MySQL is already upgraded to 5.0.51a, u /etc/mysql/debian-start[4436]: Checking for insecure root accounts. /etc/mysql/debian-start[4444]: Checking for crashed MySQL tables.
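    One plausible explanation is that something other than a regular mount is holding the copied disk (LVM, software RAID, swap, or an open file handle), since the kernel reports "busy" for all of those. The following diagnostic commands are a sketch, not part of the original post:

        cat /proc/mounts | grep sdd         # the kernel's own mount table, independent of 'mount'
        cat /proc/swaps                     # a partition on sdd may be active as swap
        sudo fuser -vm /dev/sdd /dev/sdd1   # processes holding the device or its partition open
        sudo lsof /dev/sdd /dev/sdd1        # same check via lsof
        sudo dmsetup ls                     # device-mapper/LVM mappings that can claim the disk
        cat /proc/mdstat                    # software RAID arrays that can claim the disk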

    Read the article

  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. Here is what I have done thus far. The 2003 server is a physical machine, the 2008 server is a virtual machine Built a virtual machine that has Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member On the 2003 DC, Raised Domain Functional Level and Forest Functional Level to Windows Server 2003 On the 2003 DC, went into the registry and navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters and verified that the Schema Version is 30 On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 Edition to copy over the adprep folder. This version is the only one that seemed to work On the 2003 DC, opened command prompt and went to adprep directory and ran adprep /forestprep , adprep /domainprep , and adprep /domainprep /gpprep On the 2008 server, Installed the Active Directory Domain Services role from Server Manager On the 2003 DC, went into the registry and navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters and verified that the Schema Version is now 44 When I go to run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare using adprep /forestprep" I went back to the 2003 DC server and went through the adprep logs and I came across this: Adprep was unable to modify the security descriptor on object CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. [Status/Consequence] ADPREP was unable to merge the existing security descriptor with the new access control entry (ACE). [User Action] Check the log file ADPrep.log in the C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information. Adprep encountered an LDAP error. *Error code: 0x20. Server extended error code: 0x208d, Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com* In fact, I got three of these errors. The LDAP error is consistent with all three, but the top part where it says "Adprep was unable to modify the security descriptor on object" are different. They are the following: CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. The credentials I am using on the 2008 server when running dcpromo is my domain account. My account is part of the domain and enterprise admin groups. I've tried various quick fixes that I've came across through Google searches that include: Disabling AntiVirus on current DCs Pointing DNS on PDC to point to itself Changing the Schema Update Allowed key to 1 and tried rerunning adprep - when rerunning adprep, told me that Forest-wide information has already been updated Disabled Windows Firewall on the Server 2008 box On the 2003 DC, went to Domain Controller Security Policy Local Policies User Rights Assignment and added Domain Admins to the Enable computer and user accounts to be trusted for delegation policy setting Both our PDC and BDC are Global Catalog Servers. 
Not sure if this matters or not. I ran the command netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC. I ran dcdiag /v on the 2003 PDC and the only thing that failed was Services: the Dnscache service is stopped on the PDC. I even went as far as deleting the virtual machine and recreating it from scratch - to no avail... Help :(

    Read the article

  • How do you handle authentication across domains?

    - by William Ratcliff
    I'm trying to save users of our services from having to have multiple accounts/passwords. I'm in a large organization and there's one group that handles part of user authentication for users who are from outside the facility (primarily for administrative functions). They store a secure cookie to establish a session and communicate only via HTTPS via the browser. Sessions expire either through: 1) explicit logout of the user 2) Inactivity 3) Browser closes My team is trying to write a web application to help users analyze data that they've taken (or are currently taking) while at our facility. We need to determine if a user is 1) authenticated 2) Some identifier for that user so we can store state for them (what analysis they are working on, etc.) So, the problem is how do you authenticate across domains (the authentication server for the other application lives in a border region between public and private--we will live in the public region). We have come up with some scenarios and I'd like advice about what is best practice, or if there is one we haven't considered. Let's start with the case where the user is authenticated with the authentication server. 1) The authentication server leaves a public cookie in the browser with their primary key for a user. If this is deemed sensitive, they encrypt it on their server and we have the key to decrypt it on our server. When the user visits our site, we check for this public cookie. We extract the user_id and use a public api for the authentication server to request if the user is logged in. If they are, they send us a response with: response={ userid :we can then map this to our own user ids. If necessary, we can request additional information such as email-address/display name once (to notify them if long running jobs are done, or to share results with other people, like with google_docs). account_is_active:Make sure that the account is still valid session_is_active: Is their session still active? If we query this for a valid user, this will have a side effect that we will reset the last_time_session_activated value and thus prolong their session with the authentication server last_time_session_activated: let us know how much time they have left ip_address_session_started_from:make sure the person at our site is coming from the same ip as they started the session at } Given this response, we either accept them as authenticated and move on with our app, or redirect them to the login page for the authentication server (question: if we give an encrypted portion of the response (signed by us) with the page to redirect them to, do we open any gaping security holes in the authentication server)? The flaw that we've found with this is that if the user visits evilsite.com and they look at the session cookie and send a query to the public api of the authentication server, they can keep the session alive and if our original user leaves the machine without logging out, then the next user will be able to access their session (this was possible before, but having the session alive eternally makes this worse). 2) The authentication server redirects all requests made to our domain to us and we send responses back through them to the user. Essentially, they act as a proxy. The advantage of this is that we can handshake with the authentication server, so it's safe to be trusted with the email address/name of the user and they don't have to reenter it So, if the user tries to go to: authentication_site/mysite_page1 they are redirected to mysite. 
Which would you choose, or is there a better way? The goal is to minimize the "Yet Another Password / Yet Another Username" problem... Thanks!
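    For scenario 1, the server-side check against the authentication server's public API can be very small. The sketch below uses curl and jq; the endpoint URL, the user id and client IP values, and the JSON field names are hypothetical placeholders modelled on the response fields described above:

        # user_id extracted (and decrypted, if the cookie is encrypted) from the public cookie
        USER_ID="12345"
        CLIENT_IP="203.0.113.7"   # remote address of the request as seen by our web server

        RESPONSE=$(curl -s "https://auth.example.org/api/session?userid=${USER_ID}")
        ACTIVE=$(echo "$RESPONSE" | jq -r '.session_is_active')
        SESSION_IP=$(echo "$RESPONSE" | jq -r '.ip_address_session_started_from')

        if [ "$ACTIVE" = "true" ] && [ "$SESSION_IP" = "$CLIENT_IP" ]; then
            echo "authenticated: map userid to our local account and continue"
        else
            echo "not authenticated: redirect to the authentication server's login page"
        fi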

    Read the article

  • Useful Command-line Commands on Windows

    - by Sung Meister
    The aim for this Wiki is to promote using a command to open up commonly used applications without having to go through many mouse clicks - thus saving time on monitoring and troubleshooting Windows machines. Answer entries need to specify Application name Commands Screenshot (Optional) Shortcut to commands && - Command Chaining %SYSTEMROOT%\System32\rcimlby.exe -LaunchRA - Remote Assistance (Windows XP) appwiz.cpl - Programs and Features (Formerly Known as "Add or Remove Programs") appwiz.cpl @,2 - Turn Windows Features On and Off (Add/Remove Windows Components pane) arp - Displays and modifies the IP-to-Physical address translation tables used by address resolution protocol (ARP) at - Schedule tasks either locally or remotely without using Scheduled Tasks bootsect.exe - Updates the master boot code for hard disk partitions to switch between BOOTMGR and NTLDR cacls - Change Access Control List (ACL) permissions on a directory, its subcontents, or files calc - Calculator chkdsk - Check/Fix the disk surface for physical errors or bad sectors cipher - Displays or alters the encryption of directories [files] on NTFS partitions cleanmgr.exe - Disk Cleanup clip - Redirects output of command line tools to the Windows clipboard cls - clear the command line screen cmd /k - Run command with command extensions enabled color - Sets the default console foreground and background colors in console command.com - Default Operating System Shell compmgmt.msc - Computer Management control.exe /name Microsoft.NetworkAndSharingCenter - Network and Sharing Center control keyboard - Keyboard Properties control mouse(or main.cpl) - Mouse Properties control sysdm.cpl,@0,3 - Advanced Tab of the System Properties dialog control userpasswords2 - Opens the classic User Accounts dialog desk.cpl - opens the display properties devmgmt.msc - Device Manager diskmgmt.msc - Disk Management diskpart - Disk management from the command line dsa.msc - Opens active directory users and computers dsquery - Finds any objects in the directory according to criteria dxdiag - DirectX Diagnostic Tool eventvwr - Windows Event Log (Event Viewer) explorer . - Open explorer with the current folder selected. explorer /e, . - Open explorer, with folder tree, with current folder selected. F7 - View command history find - Searches for a text string in a file or files findstr - Find a string in a file firewall.cpl - Opens the Windows Firewall settings fsmgmt.msc - Shared Folders fsutil - Perform tasks related to FAT and NTFS file systems ftp - Transfers files to and from a computer running an FTP server service getmac - Shows the mac address(es) of your network adapter(s) gpedit.msc - Group Policy Editor gpresult - Displays the Resultant Set of Policy (RSoP) information for a target user and computer httpcfg.exe - HTTP Configuration Utility iisreset - To restart IIS InetMgr.exe - Internet Information Services (IIS) Manager 7 InetMgr6.exe - Internet Information Services (IIS) Manager 6 intl.cpl - Regional and Language Options ipconfig - Internet protocol configuration lusrmgr.msc - Local Users and Groups Administrator msconfig - System Configuration notepad - Notepad? 
;) mmsys.cpl - Sound/Recording/Playback properties mode - Configure system devices more - Displays one screen of output at a time mrt - Microsoft Windows Malicious Software Removal Tool mstsc.exe - Remote Desktop Connection nbstat - displays protocol statistics and current TCP/IP connections using NBT ncpa.cpl - Network Connections netsh - Display or modify the network configuration of a computer that is currently running netstat - Network Statistics net statistics - Check computer up time net stop - Stops a running service. net use - Connects a computer to or disconnects a computer from a shared resource, or displays information about computer connections odbcad32.exe - ODBC Data Source Administrator pathping - A traceroute that collects detailed packet loss stats perfmon - Opens Reliability and Performance Monitor ping - Determine whether a remote computer is accessible over the network powercfg.cpl - Power management control panel applet quser - Display information about user sessions on a terminal server qwinsta - See disconnected remote desktop sessions reg.exe - Console Registry Tool for Windows regedit - Registry Editor rasdial - Connects to a VPN or a dialup network robocopy - Backup/Restore/Copy large amounts of files reliably rsop.msc - Resultant Set of Policy (shows the combined effect of all group policies active on the current system/login) runas - Run specific tools and programs with different permissions than the user's current logon provides sc - Manage anything you want to do with services. schtasks - Enables an administrator to create, delete, query, change, run and end scheduled tasks on a local or remote system. secpol.msc - Local Security Settings services.msc - Services control panel set - Displays, sets, or removes cmd.exe environment variables. set DIRCMD - Preset dir parameter in cmd.exe start - Starts a separate window to run a specified program or command start. - opens the current directory in the Windows Explorer. shutdown.exe - Shutdown or Reboot a local/remote machine subst.exe - Associates a path with a drive letter, including local drives systeminfo -Displays a comprehensive information about the system taskkill - terminate tasks by process id (PID) or image name tasklist.exe - List Processes on local or a remote machine taskmgr.exe - Task Manager telephon.cpl - Telephone and Modem properties timedate.cpl - Date and Time title - Change the title of the CMD window you have open tracert - Trace route wmic - Windows Management Instrumentation Command-line winver.exe - Find Windows Version wscui.cpl - Windows Security Center wuauclt.exe - Windows Update AutoUpdate Client

    Read the article

  • SharePoint AD-imported users are becoming sporadically corrupted, forcing us to create new accounts

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008. ***UPDATE More details in testing. This Sharepoint is in an AD Child domain (clients.mycompany.local), which is sub to the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users. I have elevated the user rights to Domain. In looking at the logs, it seems that the Sharepoint server is trying to authenticate them by querying the DC for the clients domain (which is the way it normally works and still works for all existing identically configured users). I think if I could force it to authenticate up to the top domain DC then it would be ok?? I have around 50 users, over the past 2 months, I have had a handful of the users suddenly unable to login to Sharepoint. When they login, they either get a blank screen or they are repropmted. These users are using accounts that have been used for many months, sometimes the problem originates with a password change. In all cases, the users account works on every other Active Directory authenticated resource (domain, exchange, LDAP). In the most recent case, last night I was forced deleted a user ("John smith") because of corruption. The orifinal account name was jsmith. I deleted him from active directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renamed it too just to be sure to "smithj"). At first, this did not wor, the user could still access all other resources but Sharepoint. then, some 30 minutes later it inexplicably started working. This morning, the user changed passwords, which immediatly broke the login on Sharepoint again. Logs by request from matt b Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 7888 Task Category: Office Server General Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: A runtime exception was detected. Details follow. Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) – TrevJen 19 hours ago Techinal Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex) at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML) at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed) – TrevJen 19 hours ago at Microsoft.SharePoint.SPField.Update() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn) at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch() at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock) – TrevJen 19 hours ago Log Name: Application Source: Office SharePoint Server Date: 4/13/2010 2:00:00 PM Event ID: 5553 Task Category: User Profiles Level: Error Keywords: Classic User: N/A Computer: nb-portal-01.clients.netboundary.local Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. 
(Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)). – TrevJen 18 hours ago

    Read the article

  • UFW as an active service on Ubuntu

    - by lamcro
    Every time I restart my computer, and check the status of the UFW firewall (sudo ufw status), it is disabled, even if I then enable and restart it. I tried putting sudo ufw enable as one of the startup applications but it asks for the sudo password every time I log on, and I'm guessing it does not protect anyone else who logs on my computer. How can I setup ufw so it is activated when I turn on my computer, and protects all accounts? Update I just tried /etc/init.d/ufw start, and it activated the firewall. Then I restarted the computer, and again it was disabled. content of /etc/ufw/ufw.conf # /etc/ufw/ufw.conf # # set to yes to start on boot ENABLED=yes # set to one of 'off', 'low', 'medium', 'high' LOGLEVEL=full content of /etc/default/ufw # /etc/default/ufw # # Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback # accepted). You will need to 'disable' and then 'enable' the firewall for # the changes to take affect. IPV6=no # Set the default input policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT. # ACCEPT enables connection tracking for NEW inbound packets on the INPUT # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note # that if you change this you will most likely want to adjust your rules. DEFAULT_INPUT_POLICY="DROP" # Set the default output policy to ACCEPT, ACCEPT_NO_TRACK, DROP, or REJECT. # ACCEPT enables connection tracking for NEW outbound packets on the OUTPUT # chain, whereas ACCEPT_NO_TRACK does not use connection tracking. Please note # that if you change this you will most likely want to adjust your rules. DEFAULT_OUTPUT_POLICY="ACCEPT" # Set the default forward policy to ACCEPT, DROP or REJECT. Please note that # if you change this you will most likely want to adjust your rules DEFAULT_FORWARD_POLICY="DROP" # Set the default application policy to ACCEPT, DROP, REJECT or SKIP. Please # note that setting this to ACCEPT may be a security risk. See 'man ufw' for # details DEFAULT_APPLICATION_POLICY="SKIP" # By default, ufw only touches its own chains. Set this to 'yes' to have ufw # manage the built-in chains too. Warning: setting this to 'yes' will break # non-ufw managed firewall rules MANAGE_BUILTINS=no # # IPT backend # # only enable if using iptables backend IPT_SYSCTL=/etc/ufw/sysctl.conf # extra connection tracking modules to load IPT_MODULES="nf_conntrack_ftp nf_nat_ftp nf_conntrack_irc nf_nat_irc" Update Followed your advise and ran update-rc.d with no luck. lester@mcgrath-pc:~$ sudo update-rc.d ufw defaults update-rc.d: warning: /etc/init.d/ufw missing LSB information update-rc.d: see <http://wiki.debian.org/LSBInitScripts> Adding system startup for /etc/init.d/ufw ... /etc/rc0.d/K20ufw -> ../init.d/ufw /etc/rc1.d/K20ufw -> ../init.d/ufw /etc/rc6.d/K20ufw -> ../init.d/ufw /etc/rc2.d/S20ufw -> ../init.d/ufw /etc/rc3.d/S20ufw -> ../init.d/ufw /etc/rc4.d/S20ufw -> ../init.d/ufw /etc/rc5.d/S20ufw -> ../init.d/ufw lester@mcgrath-pc:~$ ls -l /etc/rc?.d/*ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc0.d/K20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc1.d/K20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc2.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc3.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc4.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc5.d/S20ufw -> ../init.d/ufw lrwxrwxrwx 1 root root 13 2009-12-20 20:34 /etc/rc6.d/K20ufw -> ../init.d/ufw
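    Since ufw.conf already has ENABLED=yes, the usual remaining causes are the init script not actually running at boot or another script flushing iptables afterwards. A short diagnostic sketch, assuming the sysv-init layout shown above:

        ls -l /etc/rc2.d/ | grep ufw                     # the S20ufw link must exist for the default runlevel
        sudo /etc/init.d/ufw start && sudo ufw status    # does a manual start through the init script work?
        grep -i ufw /var/log/syslog | tail -n 20         # errors logged while starting at boot
        sudo grep -rln "iptables -F" /etc/ 2>/dev/null   # anything else flushing the rules after ufw runs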

    Read the article

  • Connection Timed Out - Simple outbound Postfix for PHP Contact form

    - by BLaZuRE
    Alright, so I only got Postfix for a PHP contact form that will send email to a single . I only want it to send out mail to a single external address ([email protected]). I have domain sub1.sub2.domain.com. I installed Postfix out of the Ubuntu repo, with minimal config changes. I cannot get Postfix to send mail externally (though it succeeds for internal accounts, which is unnecessary). The email simply defers if I generate an email using PHP mail(). If I try to form my own in telnet, right after rcpt to: [email][email protected][/email], I get a postfix/smtpd[31606]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: example.com; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> when commenting out default_transport = error and relay_transport = error lines, I get the following: Jun 26 14:33:00 sub1 postfix/smtp[12191]: 2DA06F88206A: to=<[email protected]>, relay=none, delay=514, delays=409/0.01/105/0, dsn=4.4.1, status=deferred (connect to aspmx3.googlemail.com[74.125.127.27]:25: Connection timed out) Jun 26 14:36:36 sub1 postfix/smtp[12225]: connect to mta7.am0.yahoodns.net[98.139.175.224]:25: Connection timed out Jun 26 14:38:00 sub1 postfix/smtp[12225]: 22952F88208E: to=<[email protected]>, relay=none, delay=655, delays=550/0.01/105/0, dsn=4.4.1, status=deferred (connect to mta5.am0.yahoodns.net[67.195.168.230]:25: Connection timed out) My main.cf # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = sub1.sub2.domain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = sub1.sub2.domain.com, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all default_transport = error relay_transport = error Also, a dig sub1.sub2.domain.com MX returns: ; <<>> DiG 9.7.0-P1 <<>> sub1.sub2.domain.com MX ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4853 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;sub1.sub2.domain.com. IN MX ;; AUTHORITY SECTION: sub2.domain.com. 600 IN SOA sub2.domain.com. sub5.domain.com. 
2012062915 7200 600 1209600 600 ;; Query time: 0 msec ;; SERVER: x.x.x.x#53(x.x.x.x) ;; WHEN: Fri Jun 29 16:35:00 2012 ;; MSG SIZE rcvd: 84 lsof -i returns empty netstat -t -a | grep LISTEN returns tcp 0 0 localhost:mysql *:* LISTEN tcp 0 0 *:ftp *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost:ipp *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp6 0 0 [::]:netbios-ssn [::]:* LISTEN tcp6 0 0 [::]:www [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN tcp6 0 0 localhost:ipp [::]:* LISTEN tcp6 0 0 [::]:microsoft-ds [::]:* LISTEN
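    The repeated "Connection timed out" to Google's and Yahoo's MX servers usually points to outbound port 25 being blocked upstream (very common on VPS and residential links) rather than to the Postfix configuration itself. A quick connectivity test, plus re-enabling the transports that main.cf currently forces to error, sketched below; the exact MX hostnames are just examples:

        # can this host reach any external MX on port 25 at all?
        nc -vz -w 10 aspmx.l.google.com 25
        nc -vz -w 10 mta5.am0.yahoodns.net 25

        # clear the transports that were set to 'error', then reload and flush the queue
        sudo postconf -e "default_transport ="
        sudo postconf -e "relay_transport ="
        sudo postfix reload
        sudo postqueue -f
        tail -f /var/log/mail.log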

    Read the article

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set a samba server that seems to work, all shares are seemingly exported as readonly, however. The machine is called "lx". When I'm on lx I can run the following command: froh@lx:~$ smbclient //lx/export -UAdministrator Enter Administrator's password: Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4] smb: \> mkdir wrzlbrmpf NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf smb: \> ls . D 0 Fri Dec 3 19:04:20 2010 .. D 0 Sun Nov 28 01:32:37 2010 zork D 0 Fri Dec 3 18:53:33 2010 bar D 0 Sun Nov 28 23:52:43 2010 ork 1 Fri Dec 3 18:53:02 2010 foo 1 Sun Nov 28 23:52:41 2010 gaga D 0 Fri Dec 3 19:04:20 2010 How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got kerberos working with the following krb5.conf file: [libdefaults] ticket_lifetime = 24000 clock_skew = 300 default_realm = CUSTOMER.LOCAL [realms] CUSTOMER.LOCAL = { kdc = SB4.customer.local:88 admin_server = SB4.customer.local:464 default_domain = CUSTOMER.LOCAL } [domain_realm] .customer.local = CUSTOMER.LOCAL customer.local = CUSTOMER.LOCAL #[login] # krb4_convert = true # krb4_get_tickets = false I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works: root@lx:~# net ads testjoin Join is OK root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD' plaintext password authentication succeeded challenge/response password authentication succeeded wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectiveley. I noted that domain accounts did NOT include a domain and they are in german (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have: root@lx:/etc/pam.d# cat samba @include common-auth @include common-account @include common-session-noninteractive root@lx:/etc/pam.d# grep -ve '^#' common-auth auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth required pam_permit.so root@lx:/etc/pam.d# grep -ve '^#' common-account account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so account requisite pam_deny.so account required pam_permit.so account required pam_krb5.so minimum_uid=1000 root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so session optional pam_winbind.so At some point I joined the linux box into the AD domain. After (manually) creating a home directory on the linux box I can log in using the Adminstrator user with the password taken from AD. 
Now I run samba with the following setup: [global] netbios name = LX realm = CUSTOMER.LOCAL workgroup = CUSTOMER security = ADS encrypt passwords = yes password server = 192.168.20.244 #IP des Domain Controllers os level = 0 socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384 idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = Yes winbind enum groups = Yes preferred master = no winbind separator = + dns proxy = no wins proxy = no # client NTLMv2 auth = Yes log level = 2 logfile = /var/log/samba/log.smbd.%U template homedir = /home/%U template shell = /bin/bash [export] path = /mnt/sdc1/export read only = No public = Yes Currently I don't care whether export is exported to everyone or just one user, I want to see somebody WRITING to that directory before I start fiddling with the authentication settings. (Who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED . Accessing it from windows shows ACLs that look correct (The user may write) - but it does not work, I can only read files not write. The directory to be exported looks like this: root@lx:/etc/pam.d# ls -ld /mnt/ drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/ drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/ drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/ root@lx:/etc/pam.d# getfacl /mnt/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/ # owner: root # group: root user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/ # owner: froh # group: froh user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/export/ # owner: administrator # group: domänen-admins user::rwx group::rwx group:domänen-admins:rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:group:domänen-admins:rwx default:mask::rwx default:other::rwx My, oh my what am I overlooking? What am I to blind to see?
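    NT_STATUS_MEDIA_WRITE_PROTECTED normally means the filesystem under the share is mounted read-only or the effective share definition still disallows writes, rather than an ACL problem. A few checks worth running; this is a diagnostic sketch, and the local user name is assumed to be the winbind-mapped administrator:

        testparm -s 2>/dev/null | grep -A6 '^\[export\]'            # effective [export] settings smbd actually loaded
        grep sdc1 /proc/mounts                                      # look for 'ro' among the mount options
        sudo smbcontrol smbd reload-config                          # make sure the running smbd picked up smb.conf changes
        su - administrator -c 'touch /mnt/sdc1/export/writetest'    # can the mapped user write locally at all?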

    Read the article

  • VPS with 512 MB RAM running WordPressMU consumes lots of memory

    - by CAPitalZ
    I have googled for days and gathered all optimization suggestions and tried. My sites are not getting any high hits. May be like 100 hits per day [all my sites combined]. Here are my specs I have 512 MB RAM VPS with burstable 1024 MB. Centos 5 32-bit & cPanel/WHM Apache 2.2 MySQL 5.0 PHP 5.3.2 Here is my Configs I have 2 WordPressMU production sites, and 1 test site my.cnf # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/lib/mysql/mysql.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking skip-bdb skip-innodb key_buffer = 16M max_allowed_packet = 1M table_cache = 64 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M #CAPitalZ thread_cache_size=8 thread_concurrency=4 #query_cache_type=1 #query_cache_limit=1M query_cache_size=16M concurrent_insert=2 low_priority_updates=1 max_connections=50 tmp_table_size=16M max_heap_table_size=16M join_buffer_size=1M interactive_timeout=25 wait_timeout=1000 #connect_timout=10 not able to restart mysql max_connect_errors=10 # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # skip-networking # Disable Federated by default skip-federated # Replication Master Server (default) # binary logging is required for replication log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 [mysqld_safe] open_files_limit=8192 [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [myisamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [mysqlhotcopy] interactive-timeout httpd.conf I have unselected many modules and recompiled using EasyApache in WHM. Only have the following modules built Deflate Expires Fileprotect Imagemap MPM Prefork Version [default] EAccelerator for PHP Bcmath Calendar CurlSSL [I'm using Curl. But I don't have any https sites] Expat GD [for image cropping] Gettext Imap Mbregex [default] Mbstring [need both Mbregex and Mbstring for utf-8] Mysql of the system MySQL "Improved" extension. Sockets TTF (FreeType) [I'm using custom font] Zlib Under Global Configuration I only have FollowSymLinks enabled I Have TraceEnable, ServerSignature, FileETag OFF ServerTokens ProductOnly DirectoryIndex Priority has index.php as the first one I have removed Clamd [Clam Anti-virus] SpamAssasin is Off Under Tweak Settings Default catch-all/default address behavior for new accounts. 
This is set to "fail" All stats programs turned off I have eAccelerator installed and checked in phpinfo and its working [Pre VirtualHost Include under WHM] Timeout 20 KeepAlive On MaxKeepAliveRequests 200 KeepAliveTimeout 3 MinSpareServers 1 MaxSpareServers 3 StartServers 1 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 4000 ExtendedStatus Off #ServerType standalone this throws error HostnameLookups Off <Directory "/"> AllowOverride None </Directory> My sites will take ages to load and WHM/CPanel will not even load. adadaa.com/ http://adadaa.net/ kadais.ca/ My average memory consumption is like 1000 MB! [yes always bursting] The process that consumes most CPU and also most memory is mysql But I also get like 15 httpd processes [when its bursting] I already got warning from cpuwatchcheck saying "While processing, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 07:00:37 up 11:30, 0 users, load average: 14.64, 16.79, 20.07" I don't know, I have tried switching these config values many different times, but nothing seems to work. Please show some light... Thanks
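    Before tuning further it helps to see where the memory is actually going: the biggest resident processes, how many Apache children are really running, and MySQL's configured ceiling. A small sketch, assuming the cPanel Apache binary is named httpd:

        free -m
        ps aux --sort=-rss | head -n 15     # largest memory consumers right now
        ps -C httpd -o rss= | awk '{sum+=$1; n++} END {print n" httpd procs, "sum/1024" MB resident"}'
        mysql -e "SHOW STATUS LIKE 'Max_used_connections'; SHOW VARIABLES LIKE 'max_connections';"
        # rough MySQL worst case = key_buffer + query_cache_size
        #                        + max_connections * (sort + read + read_rnd + join buffers)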

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad pratice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. But we're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connection to be as painless as possible to do so, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement. This is what we have in the office: Two shiny new i7 servers, each running Windows Server 2008. Precise eventual layout is still being debated, a little, but the current suggestion is that one is primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication). An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere. A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve one particular everything. Assorted desktops, laptops, etc, at least one for each person in the office (mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement. Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. 
Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?

    Read the article

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now and although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similar sized one (80 Gig). I've spent many hours reading upon dd, dd_rescue, rsync, clonezilla and LVM mirroring yet the sheer number of options and nightmarish accounts has left me frozen - unable to make an informed decision as to how to start. I've made a few attempts. dd failed after about 2 hours, as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive and then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot. As you can probably tell, I'm out of my depth here and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and ffMpeg and provides the back-end for a Text-to-Speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks. Update: Thanks for your responses so far - it's much appreciated and makes me feel a little more confident when I can double-check things here. I had the idea of doing a fresh install of CentOS (from the original disk) on the new drive so the partitions and LVM were all set up correctly (after disconnecting my source drive to prevent painful mistakes). I then booted into rescue mode from the same CD, and, to avoid a conflicting label, changed the /boot partition's label using e2label to /bootnew. I then changed the VolGroup name using lvm vgrename from VolGroup00 to VolGroup001. I could then boot with both drives in. After mounting the new drive (via its VolGroup001 alias) into /newhd, I rsync-ed over everything I could to the new drive, using -avr switches and backslashes. Like mentioned here. I then disconnected my original source drive again, booted from the liveCD again, changed back the boot partition label from /bootnew to /boot using e2label and then renamed the VolGroup back to VolGroup00. I then rebooted and it went through the familiar start-up routine only to not find a host of files in proc, usr, lib, var etc. The boot did complete but there were lots of red 'FAILS'. I could log in with my existing creds, but the network was kaput, I couldn't startX (desktop GUI) and there were also a few (a lot) of error messages pertaining to iptables. Back to square one. I naively thought I'd nailed it. Shall I just buy a bigger hard drive and attempt the dd route? I've read that this can mess with LVM setups and there's the added risk of working on two unmounted drives at once with a low-level tool. Thanks again.
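    Because the destination disk is slightly smaller, a raw dd of the whole device cannot fit; a file-level copy into a freshly built LVM layout, followed by reinstalling GRUB in a chroot, is the usual route. The sketch below is hedged: device names, sizes, and volume names are examples, and it assumes you work from a rescue/live CD with both disks attached, using a temporary volume group name so it cannot clash with the old disk's VolGroup00:

        # activate and mount the old system read-only
        vgchange -ay
        mkdir -p /mnt/old /mnt/new
        mount -o ro /dev/VolGroup00/LogVol00 /mnt/old
        mount -o ro /dev/sda1 /mnt/old/boot

        # build the new layout on the new disk (assumed /dev/sdb: sdb1 = /boot, sdb2 = LVM PV)
        pvcreate /dev/sdb2
        vgcreate VGnew /dev/sdb2
        lvcreate -L 70G -n LogVol00 VGnew
        mkfs.ext3 /dev/sdb1
        mkfs.ext3 /dev/VGnew/LogVol00
        mount /dev/VGnew/LogVol00 /mnt/new
        mkdir /mnt/new/boot && mount /dev/sdb1 /mnt/new/boot

        # copy everything, preserving hard links and numeric ownership
        rsync -aH --numeric-ids /mnt/old/ /mnt/new/

        # make the copy bootable
        mount --bind /dev /mnt/new/dev && mount --bind /proc /mnt/new/proc
        chroot /mnt/new grub-install /dev/sdb
        touch /mnt/new/.autorelabel      # force an SELinux relabel on first boot
        # after shutting down and removing the old disk, rescue-boot once more and run
        # 'vgrename VGnew VolGroup00' so the existing fstab and grub.conf still match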

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, backup and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday has crippled numerous computers. Article includes link to forums to fix computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic. 
Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit. Latest Features How-To Geek ETC The How-To Geek Guide to Learning Photoshop, Part 8: Filters Get the Complete Android Guide eBook for Only 99 Cents [Update: Expired] Improve Digital Photography by Calibrating Your Monitor The How-To Geek Guide to Learning Photoshop, Part 7: Design and Typography How to Choose What to Back Up on Your Linux Home Server How To Harmonize Your Dual-Boot Setup for Windows and Ubuntu Hang in There Scrat! – Ice Age Wallpaper How Do You Know When You’ve Passed Geek and Headed to Nerd? On The Tip – A Lamborghini Theme for Chrome and Iron What if Wile E. Coyote and the Road Runner were Human? [Video] Peaceful Winter Cabin Wallpaper Store Tabs for Later Viewing in Opera with Tab Vault

    Read the article

  • How To Skip Commercials in Windows 7 Media Center

    - by DigitalGeekery
    If you use Windows 7 Media Center to record live TV, you’re probably interested in skipping through commercials. After all, a big reason to record programs is to avoid commercials, right? Today we focus on a fairly simple and free way to get you skipping commercials in no time. In Windows 7, the .wtv file format has replaced the dvr-ms file format used in previous versions of Media Center for Recorded TV. The .wtv file format, however, does not work very well with commercial skipping applications.  The Process Our first step will be to convert the recorded .wtv files to the previously used dvr-ms file format. This conversion will be done automatically by WtvWatcher. It’s important to note that this process deletes the original .wtv file after successfully converting to .dvr-ms. Next, we will use DVRMSToolBox with the DTB Addin to handle commercials skipping. This process does not “cut” or remove the commercials from the file. It merely skips the commercials during playback. WtvWatcher Download and install the WTVWatcher (link below). To install WtvWatcher, you’ll need to have Windows Installer 3.1 and .NET Framework 3.5 SP1. If you get the Publisher cannot be verified warning you can go ahead and click Install. We’ve completely tested this app and it contains no malware and runs successfully.   After installing, the WtvWatcher will pop up in the lower right corner of your screen. You will need to set the path to your Recorded TV directory. Click on the button for “Click here to set your recorded TV path…”   The WtvWatcher Preferences window will open…   …and you’ll be prompted to browse for your Recorded TV folder. If you did not change the default location at setup, it will be found at C:\Users\Public\Recorded TV. Click “OK” when finished. Click the “X” to close the Preferences screen. You should now see WtvWatcher begin to convert any existing WTV files.   The process should only take a few minutes per file. Note: If WtvWatcher detects an error during the conversion process, it will not delete the original WTV file.    You will probably want to run WtvWatcher on startup. This will allow WtvWatcher too constantly scan for new .wtv files to convert. There is no setting in the application to run on startup, so you’ll need to copy the WTV icon from your desktop into your Windows start menu “Startup” directory. To do so, click on Start > All Programs, right-click on Startup and click on Open all users. Drag and drop, or cut and paste, the WtvWatcher desktop shortcut into the Startup folder. DVRMSToolBox and DTBAddIn Next, we need to download and install the DVRMSToolBox and the DTBAddIn. These two pieces of software will do the actual commercial skipping. After downloading the DVRMSToolBox zip file, extract it and double-click the setup.exe file.  Click “Next” to begin the installation.   Unless DVRMSToolBox will only be used by Administrator accounts, check the “Modify File Permissions” box. Click “Next.” When you get to the Optional Components window, uncheck Download/Install ShowAnalyzer. We will not be using that application. When the installation is complete, click “Close.”    Next we need to install the DTBAddin. Unzip the download folder and run the appropriate .msi file for your system. It is available in 32 & 64 bit versions. Just double click on the file and take the default options. Click “Finish” when the install is completed. You will then be prompted to restart your computer. 
After your computer has restarted, open DVRMSToolBox settings by going to Start > All Programs, DVRMSToolBox, and click on DVRMStoMPEGSettings. On the MC Addin tab, make sure that Skip Commercials is checked. It should be by default.   On the Commercial Skip tab, make sure the Auto Skip option is selected. Click “Save.”   If you try to watch recorded TV before the file conversion and commercial indexing process is complete you’ll get the following message pop up in Media Center. If you click Yes, it will start indexing the commercials if WtvWatcher has already converted it to dvr-ms. Now you’re ready to kick back and watch your recorded tv without having to wait through those long commercial breaks. Conclusion The DVRMSToolBox is a powerful and complex application with a multitude of features and utilities. We’ve showed you a quick and easy way to get your Windows Media Center setup to skip commercials. This setup, like virtually all commercial skipping setups, is not perfect. You will occasionally find a commercial that doesn’t get skipped. Need help getting your Windows 7 PC configured for TV? Check out our previous tutorial on setting up live TV in Windows Media Center. Links Download WTV Watcher Download DVRMSToolBox Download DTBAddin Similar Articles Productive Geek Tips Using Netflix Watchnow in Windows Vista Media Center (Gmedia)Increase Skip and Replay Intervals in Windows 7 Media CenterSchedule Updates for Windows Media CenterIntegrate Hulu Desktop and Windows Media Center in Windows 7Add Color Coding to Windows 7 Media Center Program Guide TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 PCmover Professional Make your Joomla & Drupal Sites Mobile with OSMOBI Integrate Twitter and Delicious and Make Life Easier Design Your Web Pages Using the Golden Ratio Worldwide Growth of the Internet How to Find Your Mac Address Use My TextTools to Edit and Organize Text
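If you are curious what a tool like WtvWatcher is actually doing behind the scenes, the heart of it is just watching the Recorded TV folder for new .wtv files and kicking off a conversion. The article does not show WtvWatcher's code, so the C# below is only a rough sketch of that idea: the folder path is the default one mentioned above, and the conversion itself is left as a comment because it depends entirely on the transcoding tool you use.

    using System;
    using System.IO;

    class RecordedTvWatcherSketch
    {
        static void Main()
        {
            // Default Recorded TV path from the article; adjust if you moved yours.
            var watcher = new FileSystemWatcher(@"C:\Users\Public\Recorded TV", "*.wtv");

            watcher.Created += (sender, e) =>
            {
                Console.WriteLine("New recording detected: " + e.FullPath);
                // A real converter would transcode to .dvr-ms here and, like
                // WtvWatcher, delete the original only after the conversion succeeds.
            };

            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Watching for new .wtv files. Press Enter to quit.");
            Console.ReadLine();
        }
    }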

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure Team published their new development portal and SDK 1.3 this week. This release includes a lot of cool features, and the one I’m most looking forward to is Remote Desktop access to your running Windows Azure virtual machine.   Configuring Remote Desktop Access It is very simple to enable remote desktop access for an Azure service. First, let’s create a new Windows Azure project in Visual Studio. In this example I created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window and select “Publish”. Next, select the “Deploy your Windows Azure project to Windows Azure” radio button at the top, and then select the credential, deployment service/slot, storage and label as usual. You must have a Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice a link named “Configure Remote Desktop connections”. This is where you enable the remote desktop feature for the service. After clicking this link we set the remote desktop access authorization information. There are four settings to configure: Certificates: We either create or select a certificate file used to encrypt the access credentials. In this example I will use the certificate file for my Management API. Username: The remote desktop user name used to access the virtual machine. Password: The password for that account. Expiration: The access credentials expire after one month by default, but we can change that here. After that, click the OK button to return to the publish dialog.   The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to it. The user name and password used to access the Azure machine are encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the Azure side using the same certificate. This is why we need to upload the certificate file to Azure. We navigate to “Hosted Services, Storage Accounts & CDN” in the left panel, create a new hosted service named “SDK13” and select its “Certificates” node. Then we click the “Add Certificates” button and select the local certificate file and its password to install it into this Azure service.   The final step is back in Visual Studio: in the publish dialog just click the OK button. Visual Studio will upload our package and configuration to the service with the remote desktop settings.   Remote Desktop Access to the Azure Virtual Machine With all of that done, let’s look back at the Windows Azure Development Portal. If I select the web role I just published, we can see a section named “Remote Access” on the toolbar. In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button and update the user name, password, certificate and expiration date.   Let’s select the instance node under the web role. In this case I created just one instance for the demo. 
We can see that when we select the instance node, the Connect button becomes enabled. Clicking this button downloads an RDP file – a Remote Desktop configuration file we can use to access our Azure virtual machine. Let’s download it to our local machine and run it. We enter the user name and password we specified when publishing the application to Azure and click OK. A certificate warning dialog may appear, because the certificate we used for encryption is not signed by a trusted provider. Just select OK in these cases, since we know the certificate is safe. Finally, the Windows Azure virtual machine appears.   A Quick Look into the Azure Virtual Machine Let’s have a very quick look inside our virtual machine. There are three disks available: C, D and E. Disk C: Stores local resources, diagnostics information, etc. Disk D: The system disk, which contains the OS, IIS, the .NET Frameworks, etc. Disk E: Stores our application code. We can also see the IIS instance hosting our website on Azure and the IP configuration of the Azure virtual machine.   Summary In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can configure the access per service, and all of the instances of that service can then be reached through the remote desktop tool. With this feature we can dig into the virtual machines behind our instances to see information such as the system event log, IIS logs, system information, etc. But we should be careful about modifying system settings, for two reasons that I know of so far: 1. If we have more than one instance for our service, we must ensure that any system settings we modify are applied to all instances/virtual machines. Otherwise, because the machines sit behind the Azure load balancer, our application may not work correctly due to the different settings between instances. 2. If the virtual machine encounters a problem and needs to be moved to another physical machine, all the settings we made will disappear.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
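One note on the certificate-based credential encryption described above: roughly speaking, the publish dialog encrypts the remote desktop password with the public key of the certificate you selected, and Azure decrypts it with the private key you uploaded to the hosted service. The C# below is not from Shaun's post – it is only a sketch of that step using the .NET EnvelopedCms class (add a reference to the System.Security assembly); the certificate file name and both passwords are placeholders.

    using System;
    using System.Security.Cryptography.Pkcs;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;

    class RdpPasswordSketch
    {
        // Encrypt the remote desktop password with the service certificate so that
        // only Azure, which holds the private key, can decrypt it.
        static string EncryptPassword(string password, X509Certificate2 certificate)
        {
            var content = new ContentInfo(Encoding.UTF8.GetBytes(password));
            var envelope = new EnvelopedCms(content);
            envelope.Encrypt(new CmsRecipient(certificate));
            return Convert.ToBase64String(envelope.Encode());
        }

        static void Main()
        {
            // "rdpcert.pfx" and both passwords are placeholders for your own values.
            var certificate = new X509Certificate2("rdpcert.pfx", "pfx-password");
            Console.WriteLine(EncryptPassword("MyRdpP@ssw0rd", certificate));
        }
    }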

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage Cloud based Mobile Backend as a Service for building enterprise mobile apps. Today, mobile apps are incredibly significant in both consumer and enterprise space and the demand for the mobile apps is unbelievably increasing in day to day business. An enterprise can’t survive in business without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give you an extra mileage for your business and IT strategy. So organizations and mobile developers are looking for different strategy for meeting this demand and adopting different development strategy for their mobile apps. Some developers are adopting hybrid mobile app development platforms, for delivering their products for multiple platforms, for fast time-to-market. Others are adopting a Mobile enterprise application platform (MEAP) such as Kony for their enterprise mobile apps for fast time-to-market and better business integration. The Challenges of Enterprise Mobility The real challenge of enterprise mobile apps, is not about creating the front-end environment or developing front-end for multiple platforms. The most important thing of enterprise mobile apps is to expose your enterprise data to mobile devices where the real pain is your business data might be residing in lot of different systems including legacy systems, ERP systems etc., and these systems will be deployed with lot of security restrictions. Exposing your data from the on-premises servers, is not a easy thing for most of the business organizations. Many organizations are spending too much time for their front-end development strategy, but they are really lacking for building a strategy on their back-end for exposing the business data to mobile apps. So building a REST services layer and mobile back-end services, on the top of legacy systems and existing middleware systems, is the key part of most of the enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services for building mobile apps. For some mobile apps, we can’t predict its user base, especially for products where customers can gradually increase at any time. And for today’s mobile apps, faster time-to-market is very critical so that spending too much time for mobile app’s scalability, will not be worth. The real power of Cloud is the agility and on-demand scalability, where we can scale-up and scale-down our applications very easily. It would be great if we could use the power of Cloud to mobile apps. So using Cloud for mobile apps is a natural fit, where we can use Cloud as the storage for mobile apps and hosting mechanism for mobile back-end services, where we can enjoy the full power of Cloud with greater level of on-demand scalability and operational agility. So Cloud based Mobile Backend as a Service is great choice for building enterprise mobile apps, where enterprises can enjoy the massive scalability power of their mobile apps, provided by public cloud vendors such as Microsoft Windows Azure. Mobile Backend as a Service (MBaaS) We have discussed the key challenges of enterprise mobile apps and how we can leverage Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of Cloud. 
The information below provides the key features of a typical MBaaS platform: Cloud based storage for your application data. Automatic REST API services on the application data, for CRUD operations. Native push notification services with massive scalability power. User management services for authenticate users. User authentication via Social accounts such as Facebook, Google, Microsoft, and Twitter. Scheduler services for periodically sending data to mobile devices. Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily accessing the mobile services from mobile apps, with better security.  Typically, a MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS based REST APIs can use for integrating to enterprise backend systems. We can use the same mobile services for multiple platform so hat we can reuse the application logic to multiple mobile platforms. Public cloud vendors are building the mobile services on the top of their PaaS offerings. Windows Azure Mobile Services is a great platform for a MBaaS offering that is leveraging Windows Azure Cloud platform’s PaaS capabilities. Hybrid mobile development platform Titanium provides their own MBaaS services. LoopBack is a new MBaaS service provided by Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also for on-premises servers. The Challenges of MBaaS Solutions If you are building your mobile apps with a new data storage, it will be very easy, since there is not any integration challenges you have to face. But most of the use cases, you have to extract your application data in which stored in on-premises servers which might be under VPNs and firewalls. So exposing these data to your MBaaS solution with a proper security would be a big challenge. The capability of your MBaaS vendor is very important as you have to interact with your legacy systems for many enterprise mobile apps. So you should be very careful about choosing for MBaaS vendor. At the same time, you should have a proper strategy for mobilizing your application data which stored in on-premises legacy systems, where your solution architecture and strategy is more important than platforms and tools.  Windows Azure Mobile Services Windows Azure Mobile Services is an MBaaS offerings from Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure, to mobile devices, which can be used as a cloud backend for your mobile apps, which will provide global availability and reach for your mobile apps. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services and scheduler services and provides native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js where you can enjoy the full power of Node.js including the use of NPM modules for your server-side scripts. In the previous section, we had discussed some challenges of MBaaS solutions. You can leverage Windows Azure Cloud platform for solving many challenges regarding with enterprise mobility. The entire Windows Azure platform can play a key role for working as the backend for your mobile apps where you can leverage the entire Windows Azure platform for your mobile apps. 
With Windows Azure, you can easily connect to your on-premises systems which is a key thing for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure as the de facto platform for enterprise mobility, for enterprises, who have been leveraging Microsoft ecosystem for their application and IT infrastructure. Windows Azure Mobile Services  is going to next evolution where you can expect some exciting features in near future. One area, where Windows Azure Mobile Services should definitely need an improvement, is about the default storage mechanism in which currently it is depends on SQL Server. IMHO, developers should be able to choose multiple default storage option when creating a new mobile service instance. Let’s say, there should be a different storage providers such as SQL Server storage provider and Table storage provider where developers should be able to choose their choice of storage provider when creating a new mobile services project. I have been used Windows Azure and Windows Azure Mobile Services as the backend for production apps for mobile, where it performed very well. MBaaS Over MEAP Recently, many larger enterprises has been adopted Mobile enterprise application platform (MEAP) for their mobile apps. I haven’t worked on any production MEAP solution, but I heard that developers are really struggling with MEAP in different way. The learning curve for a proprietary MEAP platform is very high. I am completely against for using larger proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend to use native iOS/Android/Windows Phone or HTML5  for front-end with a cloud hosted MBaaS solution as the middleware. A MBaaS service can be consumed from multiple mobile apps where REST APIs are using to integrating with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems and these REST APIs can host on Cloud where we can enjoy the power of Cloud for our services. If you are having REST APIs for your enterprise data, then you can easily build mobile frontends for multiple platforms.   You can follow me on Twitter @shijucv
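To make the "REST APIs first" point a little more concrete, here is a minimal C# sketch of a mobile client calling such a backend over HTTP. The endpoint, route and header name are hypothetical – every MBaaS (Windows Azure Mobile Services included) defines its own – but the shape is the same: authenticate with a key, exchange JSON, and keep the client thin.

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class MobileBackendClientSketch
    {
        static void Main()
        {
            CreateOrderAsync().GetAwaiter().GetResult();
        }

        static async Task CreateOrderAsync()
        {
            using (var client = new HttpClient())
            {
                // The base address, API key header and route are placeholders; a real
                // MBaaS provider documents its own equivalents.
                client.BaseAddress = new Uri("https://contoso-backend.example.com/");
                client.DefaultRequestHeaders.Add("X-Api-Key", "your-application-key");

                var json = "{ \"customerId\": 42, \"status\": \"Open\" }";
                var response = await client.PostAsync(
                    "api/orders",
                    new StringContent(json, Encoding.UTF8, "application/json"));

                Console.WriteLine("Backend returned " + (int)response.StatusCode);
            }
        }
    }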

    Read the article

  • The new workflow management of Oracle’s Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle´s Hyperion Planning the Process Management has not only got a new name: “Approvals” now is offering the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called Promotional Path. I´d like to introduce you to changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason of using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn´t responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage set up and use of Approvals: The Approvals Administrator consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities). Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned. Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies and can edit validation rules on data forms for which access is assigned. (this includes as well Planner and Ownership Assigner responsibilities) Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUH´s or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs None, manual entity selections will be made subsequently Custom, which offers the selection for an ancestor and the relative generations, that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2 In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, now a secondary dimension and the respective members from that dimension might be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list for deselecting members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed) you should consider to dynamically link those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria is applied by right-clicking the name of an entity activated as planning unit, then selecting an item of the shown list of include or exclude options (children, descendants, etc.). Anyway in order to apply dimension changes impacting the PUH a synchronization must be run. If this is really necessary or not is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates, that another user is just changing or synchronizing the PUH. Select one of the not synchronized PUH´s (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right besides the columns for Owner and Reviewer the respective assignments can be made in the ordermthat you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity+ member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition optional users might be defined for being notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set up is completed for starting the approvals process. 
Start, stop and control of the approvals process is now done under the Tools menu, and then Manage Approvals. The new PUH feature is complemented by various additional settings and features; some of them at least should be mentioned here: Export/Import of PUHs: Out of Office agent: Validation Rules changing promotional/approval path if violated (including the use of User-defined Attributes (UDAs)): And various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and it's design, inspired the next generation of Non-SQL, distributed database solutions including Cassandra, Riak and Voldemort. The problem their designers set out to solve was, "reliability at massive scale" so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database; either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions and one, Voldemort, choose Berkeley DB Java Edition for it's node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only log-based on-disk storage similar type of storage as Berkeley DB except that it is based on a hash table which must reside entirely in-memory rather than a btree which can live in-memory or on disk. Berkeley DB evolved too, we added high availability (HA) and a replication manager that makes it easy to setup replica groups. Berkeley DB's replication doesn't partitioned the data, every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales-out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node, Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases when you boil them down to a single node you still have to store the data to some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM; the distributed, sometimes-consistent, partitioned, scale-out storage that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. 
Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing, we could jump into this No SQL market withing a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
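To illustrate that last point – that the "NoSQL" part is largely a network layer on top of a node-local key/value store – here is a deliberately tiny C# sketch. It is not Berkeley DB code (an in-memory dictionary stands in for the embedded store), and the URL layout is made up; it only shows how thin an HTTP/REST façade over a get/put API can be.

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Net;

    class KeyValueRestSketch
    {
        // The dictionary stands in for the node-local store (Berkeley DB, a DBM,
        // or anything else); the point is only the thin REST layer on top of it.
        static readonly ConcurrentDictionary<string, string> Store =
            new ConcurrentDictionary<string, string>();

        static void Main()
        {
            var listener = new HttpListener();
            // You may need to run elevated or reserve this prefix with a URL ACL.
            listener.Prefixes.Add("http://localhost:8080/kv/");
            listener.Start();
            Console.WriteLine("PUT/GET keys at http://localhost:8080/kv/{key}");

            while (true)
            {
                var ctx = listener.GetContext();
                var segments = ctx.Request.Url.Segments;
                var key = segments[segments.Length - 1];

                if (ctx.Request.HttpMethod == "PUT")
                {
                    using (var reader = new StreamReader(ctx.Request.InputStream))
                        Store[key] = reader.ReadToEnd();
                    ctx.Response.StatusCode = 204;
                }
                else if (Store.TryGetValue(key, out var value))
                {
                    using (var writer = new StreamWriter(ctx.Response.OutputStream))
                        writer.Write(value);
                }
                else
                {
                    ctx.Response.StatusCode = 404;
                }
                ctx.Response.Close();
            }
        }
    }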

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Natalia Rachelson
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle In a recent post on this blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas in their roles by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What’s that cricket noise I hear? That’s correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it’s certainly a truism that “success” in most professional endeavors requires working with people, it’s a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That’s how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you’re matching the best products for the opportunity and join you in critical meetings. They help knock the socks off your prospects in “bake off” demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It’s relatively siloed. It’s painful to search. It’s difficult to align by topic. And it’s nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It’s about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It’s collaboration that’s built around opportunities, accounts, and contacts. It’s collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. 
It’s collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes, and as we’ve largely discussed here, for specific ‘deals.’ And it’s collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let’s remember that collaboration “happens” among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. But what social networking in general, and Oracle Social Networking in particular delivers, is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com.

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that accepts a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories: Culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase) Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase) Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase) There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic, these two MSDN articles are worth a read: Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say: “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.” I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is, “yes”, but in order to understand why we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed. Example: Prototype Names Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries. 
Each year the company would work on 15-25 new toy prototypes each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype will get a code name that begins with a letter following the previously created name using the alphabetical order of the Latin/Roman alphabet. Each new year prototype names would start back at “A”. The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese). To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site. Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed? Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server the list would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the the same list sorted in the same order and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users. Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this: Antílope (Spanish) Babouin (French) Cahoun (Czech) Diamond (English) Flosse (German) If you were to sort this list using ordinal rules you’d end up with: Antílope Babouin Diamond Flosse Cahoun This sort is no good because the entry for “C” appears the bottom of the list after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison. 
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from the “Best Practices For Using Strings in the .NET Framework” makes a lot more sense: One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
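To see the difference in running code, here is a small C# sketch of the prototype-name sort described above. One assumption: the Czech entry is spelled "Čahoun" here, since the whole example hinges on the leading letter carrying a diacritic (the mark appears to have been dropped in the list as reproduced above).

    using System;
    using System.Collections.Generic;

    class PrototypeNameSortSketch
    {
        static void Main()
        {
            // The prototype names from the example, with the Czech entry written
            // with its diacritic ("Čahoun") so the problem actually shows up.
            var names = new List<string> { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

            names.Sort(StringComparer.Ordinal);
            Console.WriteLine("Ordinal:   " + string.Join(", ", names));
            // "Čahoun" lands last: the code point for 'Č' is higher than that of
            // any unaccented Latin letter.

            names.Sort(StringComparer.InvariantCulture);
            Console.WriteLine("Invariant: " + string.Join(", ", names));
            // "Čahoun" now sorts between "Babouin" and "Diamond" - and it does so
            // identically on every machine, whatever culture is configured there.
        }
    }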

    Read the article

  • User Guide to Dropbox Shared Folders

    - by Matthew Guay
    Dropbox is an incredibly useful tool for keeping all your files synced between your computers and the cloud.  Here we’re going to look at how you can keep all of your team on the same page with Dropbox shared folders. Creating a Shared Folder Setting up a shared folder in Dropbox is easy.  Add the files you want to share to a folder in Dropbox on your computer, then right-click in the folder, select Dropbox, and then choose Share This Folder.   Alternately, log into your Dropbox account online, click the drop-down menu beside the folder you want to share, and click Share this folder. Now, enter the email addresses of the people you want to share the folder with, and optionally enter a message explaining why you’re sharing the folder. The people you invite will receive an email inviting them to view and join the shared folder.  If they haven’t signed up for Dropbox, they can directly signup; otherwise, they can simply log into their Dropbox account and start adding or editing files. Shared folders have a slightly different icon in your Dropbox.  Notice the shared folder on the left has an icon with 2 people, while the folder on the right that is not shared, shows previews of its contents. See Your Shared Folder’s History Whenever your collaborators with your shared folders add or change files, you will see a tooltip notification telling you what changed. You can also view the changes online.  Log into your Dropbox account in your browser and select the Events tab.  This shows all changes to your Dropbox, but you can view only the changes in your shared folder by selecting its name on the left sidebar. Now you can see all recent changes to your folder, and can also see who added or removed each file.   On the bottom of the page, you can even add a comment that all the collaborators will see. If someone deleted a file you still need, you can restore it by clicking its link in this online history.  Or, you can view any deleted files by right-clicking in your Dropbox folder in Explorer.  Select Dropbox, and then click Show Deleted Files.   Get Notified When a Change is Made You’re not always in front of your computer; you’ve got a life beyond your projects, after all (at least hopefully).  If you really want to stay connected to what’s happening with your project, though, you can easily do that no matter where you are. Your shared Dropbox folder’s history page offers an RSS feed of all changes to the folder.  Click  the Subscribe to this feed hyperlink. Now, in the popup that opens, click “Copy to clipboard” so you can use this RSS feed. You can subscribe to RSS feeds through many web browsers, email clients, dedicated feed readers, and more.  In Firefox, Internet Explorer 7/8, or Opera, you can paste the feed address into your address bar and subscribe to the feed directly in your browser.   However, subscribing to the feed in a desktop application won’t help you much when you’re away from your computer.  One great option is to subscribe in the popular Google Reader.  Then you can check your feed from any browser, on any computer or mobile device. To add your Dropbox feed to Google Reader, log into Google Reader (link below), click Add a subscription on the top left, paste your RSS feed from Dropbox, and click Add.   Now you can see any changes to files or folders in Google Reader. You can even add your feed to your iGoogle homepage.  Click the Add it Now button on the right in the front page of Google Reader to add your feeds to iGoogle.   
Now you can see updates on your files from your homepage.  If you’re using a different computer, just login to your Google account to see what’s happening. You can also access your Google Reader feeds from many programs and apps for most major Smartphones including iPhone, Windows Phone, and Blackberry. Receive a Tweet or Text When Changes are Made If you’re a hyper-connected individual, chances are you send and receive tweets on the go.  If so, this might be the best way for you to get notified when changes are made to your Dropbox shared folder.  To do this, first create a new Twitter account to publish your changes through.  If you don’t want the whole world to see your updates, click Settings and set your new Twitter account to Private. Once the new account is created, follow it with your normal Twitter account so you’ll see updates. Now, let’s publish our Dropbox RSS feed to Twitter.  Create an account with Twitterfeed (link below). Once your account is setup, add your feed to it.  Name your feed, and enter your Feed address from Dropbox.  Click Advanced Settings to make your feed work just like you want. In Advanced Settings, change the frequency to “Every 30 mins” to make sure you’re updated on changes as quick as possible.  You can also change other settings if you like. Click “Continue to Step 2”, and then click Twitter under the available services to add your account. Make sure your signed into your new Twitter account, and then click Authenticate Twitter. Allow the application. Now, finally, click Create Service. Whenever a change is made, you will receive a tweet via your new Twitter account.  And since you can receive tweets via text message or many mobile applications, you’ll never be very far away from your Dropbox changes!   Conclusion Dropbox shared folders are a great way to keep your whole team working together on the same files in a project.  And with these handy tricks, you can keep up with your shared files wherever you are! There are a lot of cool things you can do with Dropbox make sure to check out our posts on adding Dropbox to the Windows 7 Start menu, Accessing Dropbox files from Chrome, and Syncing your Pidgin Profile Across Multiple PCs. Links Signup or access your Dropbox account Google Reader Tweet your feed with Twitterfeed Similar Articles Productive Geek Tips How to Add and Manage Shared Folders on Windows Home ServerManage User Accounts in Windows Home ServerAdd "My Dropbox" to Your Windows 7 Start MenuComplete Guide to Networking Windows 7 with XP and VistaMoving Your Personal Data Folders in Windows Vista the Easy Way TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Office 2010 reviewed in depth by Ed Bott FoxClocks adds World Times in your Statusbar (Firefox) Have Fun Editing Photo Editing with Citrify Outlook Connector Upgrade Error Gadfly is a cool Twitter/Silverlight app Enable DreamScene in Windows 7
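If you would rather watch the shared folder's feed from your own code instead of (or alongside) Google Reader and Twitter, the same RSS address works anywhere. The C# below is a minimal sketch using the .NET SyndicationFeed class (add a reference to System.ServiceModel); the feed URL shown is only a placeholder for the one you copied from your shared folder's events page.

    using System;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class DropboxFeedSketch
    {
        static void Main()
        {
            // Placeholder - paste the feed address you copied from Dropbox here.
            var feedUrl = "https://www.dropbox.com/events/feed/XXXXXXXX";

            using (var reader = XmlReader.Create(feedUrl))
            {
                var feed = SyndicationFeed.Load(reader);
                foreach (var item in feed.Items)
                {
                    // Each entry describes one change made to the shared folder.
                    Console.WriteLine(item.PublishDate.LocalDateTime + "  " + item.Title.Text);
                }
            }
        }
    }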

    Read the article

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or an error occurs. It's happening both to my own applications and to a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome), but after switching the default browser to a few others and experimenting a bit I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to Firefox and Opera as my default browser and saw exactly the same behavior. The scenario for this is a bit bizarre:

    - Running on Windows 8
    - Call Process.Start() (or ShellExecute() in the Win32 API) with a URL or an HTML file
    - Run 'As Administrator' (works fine under a non-elevated user account!) or with UAC off
    - A browser other than Internet Explorer is set as your default Web browser

    Talk about a weird scenario: something that doesn't work when you run as an Administrator, which is supposed to have rights to everything on the system! Instead, running under an Admin account - whether elevated with a User Account Control prompt or running as a full Administrator - fails. It appears that this problem does not occur for everyone, but when I looked for a solution I saw quite a few posts about it with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights.

    Try it out

    Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser, and even then it may not fail depending on how you installed your browser. To see if this is a problem, create a small Console application and call Process.Start() with a URL in it:

    using System;
    using System.Diagnostics;
    using System.IO;

    namespace Win8ShellBugConsole
    {
        class Program
        {
            static void Main(string[] args)
            {
                // Launching a URL goes through the shell's http association.
                Console.WriteLine("Launching Url...");
                Process.Start("http://microsoft.com");
                Console.Write("Press any key to continue...");
                Console.ReadKey();

                // Launching a local file goes through the file-extension association.
                Console.WriteLine("\r\n\r\nLaunching image...");
                Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg"));
                Console.Write("Press any key to continue...");
                Console.ReadKey();
            }
        }
    }

    Compile this code, then execute it from Explorer (not from Visual Studio, because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: this time you get a nice error when Process.Start() is fired. The same happens if you are running with User Account Control off altogether - i.e. you are running as a full admin account. Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type, or even starting a new EXE locally (i.e. Process.Start(@"c:\windows\notepad.exe")). All of that works, EXCEPT for URLs. The code above uses Process.Start() in .NET, but the same happens in Win32 applications that use the ShellExecute API. In some of my older Fox apps ShellExecute returns an error code of 31 - which is "No Shell Association found".

    What's the Deal?

    It turns out the problem has to do with the way browsers register themselves on Windows.
Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, has a recent version that apparently has this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser' option. It still didn't work. Neither did using the Set Program Associations dialog, which lets you assign which extensions are mapped to a given application. Each application provides a set of extension/moniker mappings that it supports, and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first; however, after repeated retries I eventually did manage to get Firefox to work, but not any of the others.

    What Works? Reinstall the Browser

    In the end I decided on the hard-core pull-the-plug solution: totally uninstall and re-install Chrome, in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works - even though the version of Chrome I was running before uninstalling and reinstalling is the same one I'm running now. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is, with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400 As expected, the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (i.e. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? And it doesn't quite explain why it also fails after I register the extensions by running Chrome's Set as Default Browser option as Administrator. But in the end it works…

    Not so fast

    It's now a couple of days later and there are still some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall Chrome seems to pop up properly with ShellExecute() calls in both regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles, one for each mode. For example, when I run Visual Studio in Admin mode and go to View in Browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way, some of the installed plug-ins don't show up, with the effect that sometimes they're visible and sometimes they're not. Chrome also seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa. Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time putting data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this.
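    As a quick diagnostic (a sketch added here for illustration, not part of the original post), you can check where the current http handler is registered. The snippet below assumes the standard UserChoice key and simply reports whether the chosen ProgId exists in the per-user and per-machine Classes hives; the kind of mismatch the Chromium issue describes would show up as a ProgId that exists under HKCU but not under HKLM.

    using System;
    using Microsoft.Win32;

    class HttpHandlerCheck
    {
        static void Main()
        {
            // The user's chosen http handler ProgId (e.g. "ChromeHTML", "FirefoxURL", "IE.HTTP").
            string progId = null;
            using (RegistryKey userChoice = Registry.CurrentUser.OpenSubKey(
                @"Software\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice"))
            {
                if (userChoice != null)
                    progId = userChoice.GetValue("ProgId") as string;
            }
            Console.WriteLine("UserChoice ProgId: " + (progId ?? "<none>"));
            if (progId == null)
                return;

            // Check whether that ProgId is registered per-user, per-machine, or both.
            using (RegistryKey hkcu = Registry.CurrentUser.OpenSubKey(@"Software\Classes\" + progId))
            using (RegistryKey hklm = Registry.LocalMachine.OpenSubKey(@"Software\Classes\" + progId))
            {
                Console.WriteLine("Registered under HKCU\\Software\\Classes: " + (hkcu != null));
                Console.WriteLine("Registered under HKLM\\Software\\Classes: " + (hklm != null));
            }
        }
    }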
Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET
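    For completeness - and purely as a hedged sketch, not something the post itself recommends - an application that cannot control which browser its users have installed sometimes falls back to handing the URL to explorer.exe when the direct shell launch fails; explorer resolves the association in its own context and may succeed where the elevated process does not. The class and method names here are made up for illustration.

    using System;
    using System.ComponentModel;
    using System.Diagnostics;

    static class UrlLauncher
    {
        // Try the normal shell launch first; if the association lookup fails
        // (as described above for elevated processes), fall back to explorer.exe.
        public static void OpenUrl(string url)
        {
            try
            {
                Process.Start(url);
            }
            catch (Win32Exception)
            {
                Process.Start("explorer.exe", url);
            }
        }
    }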

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP). Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high-risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack.

    In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings for Oracle Data Provider for .NET (ODP.NET; Manufacturer: Oracle; Type: .NET Framework Class Library):

    - Using TNS: Data Source = orcl; User ID = myUsername; Password = myPassword;
    - Using integrated security: Data Source = orcl; Integrated Security = SSPI;
    - Using the Easy Connect Naming Method: Data Source = username/password@//myserver:1521/my.server.com
    - Specifying pooling parameters: Data Source = myOracleDB; User Id = myUsername; Password = myPassword; Min Pool Size = 10; Connection Lifetime = 120; Connection Timeout = 60; Incr Pool Size = 5; Decr Pool Size = 2;

    There are many variations of connection strings, but the majority are key/value pairs delimited by semicolons. Attacks on connection strings are not new (see, for example, this SANS White Paper on Securing SQL Connection String). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build them from user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder classes to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities, because they are not aware of the danger posed by this kind of attack.
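    To make the risky pattern concrete, here is a minimal, hypothetical C# sketch (not from the article) of the kind of dynamic concatenation being warned about; the variable names and connection keys are illustrative only. Whatever the user types into the password field is pasted verbatim into the semicolon-delimited string.

    using System;

    class VulnerableConnectionStringDemo
    {
        static void Main()
        {
            // Hypothetical user input straight from a web form (illustrative only).
            string userName = "myUsername";
            string password = "xxx; Integrated Security = true";   // attacker-supplied value

            // Vulnerable pattern: dynamic concatenation with no validation or escaping.
            string connectionString =
                "Data Source = myDataSource; Initial Catalog = db; Integrated Security = no;" +
                " User ID = " + userName + "; Password = " + password + ";";

            // The injected "Integrated Security = true" becomes a second occurrence of that key,
            // and for most providers the later occurrence overrides the earlier "no"
            // (the "last one wins" behavior discussed below).
            Console.WriteLine(connectionString);
        }
    }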
So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeated parameters to request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni delivered at the 2009 AppSec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name. When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password, and a connection string is then generated to connect to the back-end database:

    Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX;

    In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:

    Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx; Integrated Security = true;

    Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, system accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the Injection category of attacks, like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as syntax that is meaningful to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork when protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
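    As a rough sketch of the safer pattern (my illustration, assuming the ODP.NET OracleConnectionStringBuilder from the Oracle.DataAccess.Client namespace and illustrative values): because the builder treats each assignment as a single value and quotes it as needed, the injected "; Integrated Security = true" should stay inside the password value rather than becoming a new key/value pair.

    using System;
    using Oracle.DataAccess.Client;   // ODP.NET; adjust the reference for your driver version

    class SaferConnectionStringDemo
    {
        static void Main()
        {
            // Attacker-supplied value from the password field (illustrative only).
            string password = "xxx; Integrated Security = true";

            OracleConnectionStringBuilder builder = new OracleConnectionStringBuilder();
            builder["Data Source"] = "myDataSource";
            builder["User Id"] = "myUsername";
            builder["Password"] = password;   // assigned as one value, not parsed as extra keys

            // The builder quotes the password, so the embedded semicolon should not
            // be able to introduce a second "Integrated Security" keyword.
            Console.WriteLine(builder.ConnectionString);
        }
    }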
    For More Information:
    - The OracleConnectionStringBuilder documentation is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm
    - Oracle has developed a publicly available course on preventing SQL Injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm
    - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • Reg Gets a Job at Red Gate (and what happens behind the scenes)

    - by red(at)work
    Mr Reg Gater works at one of Cambridge’s many high-tech companies. He doesn’t love his job, but he puts up with it because... well, it could be worse. Every day he drives to work around the Red Gate roundabout, wondering what his boss is going to blame him for today, and wondering if there could be a better job out there for him. By late morning he already feels like handing his notice in. He got the hacky look from his boss for being 5 minutes late, and then they ran out of tea. Again. He goes to the local sandwich shop for lunch, and picks up a Red Gate job menu and a Book of Red Gate while he’s waiting for his order. That night, he goes along to Cambridge Geek Nights and sees some very enthusiastic Red Gaters talking about the work they do; it sounds interesting and, of all things, fun. He takes a quick look at the job vacancies on the Red Gate website, and an hour later realises he’s still there – looking at videos, photos and people profiles. He especially likes the Red Gate’s Got Talent page, and is very impressed with Simon Johnson’s marathon time. He thinks that he’d quite like to work with such awesome people. It just so happens that Red Gate recently decided that they wanted to hire another hot shot team member. Behind the scenes, the wheels were set in motion: the recruitment team met with the hiring manager to understand exactly what they’re looking for, and to decide what interview tests to do, who will do the interviews, and to kick-start any interview training those people might need. Next up, a job description and job advert were written, and the job was put on the market. Reg applies, and his CV lands in the Recruitment team’s inbox and they open it up with eager anticipation that Reg could be the next awesome new starter. He looks good, and in a jiffy they’ve arranged an interview. Reg arrives for his interview, and is greeted by a smiley receptionist. She offers him a selection of drinks and he feels instantly relaxed. A couple of interviews and an assessment later, he gets a job offer. We make his day and he makes ours by accepting, and becoming one of the 60 new starters so far this year. Behind the scenes, things start moving all over again. The HR team arranges for a “Welcome” goodie box to be whisked out to him, prepares his contract, sends an email to Information Services (Or IS for short - we’ll come back to them), keeps in touch with Reg to make sure he knows what to expect on his first day, and of course asks him to fill in the all-important wiki questionnaire so his new colleagues can start to get to know him before he even joins. Meanwhile, the IS team see an email in SupportWorks from HR. They see that Reg will be starting in the sales team in a few days’ time, and they know exactly what to do. They pull out a new machine, and within minutes have used their automated deployment software to install every piece of software that a new recruit could ever need. They also check with Reg’s new manager to see if he has any special requirements that they could help with. Reg starts and is amazed to find a fully configured machine sitting on his desk, complete with stationery and all the other tools he’ll need to do his job. He feels even more cared for after he gets a workstation assessment, and realises he’d be comfier with an ergonomic keyboard and a footstool. They arrive minutes later, just like that. His manager starts him off on his induction and sales training. 
Along with job-specific training, he’ll also have a buddy to help him find his feet, and loads of pre-arranged demos and introductions. Reg settles in nicely, and is great at his job. He enjoys the canteen, and regularly eats one of the 40,000 meals provided each year. He gets used to the selection of teas that are available, develops a taste for champagne launch parties, and has his fair share of the 25,000 cups of coffee downed at Red Gate towers each year. He goes along to some Feel Good Fund events, and donates a little something to charity in exchange for a turn on the chocolate fountain. He’s looking a little scruffy, so he decides to get his hair cut in between meetings, just in time for the Red Gate birthday company photo. Reg starts a new project: identifying existing customers to up-sell to new bundles. He talks with the web team to generate lists of qualifying customers who haven’t recently been sent marketing emails, and sends emails out, using a new in-house developed tool to schedule follow-up calls in CRM for the same group. The customer responds, saying they’d like to upgrade but are having a licensing problem – Reg sends the issue to Support, and it gets routed to the web team. The team identifies a workaround, and the bug gets scheduled into the next maintenance release in a fortnight’s time (hey; they got lucky). With all the new stuff Reg is working on, he realises that he’d be way more efficient if he had a third monitor. He speaks to IS and they get him one - no argument. He also needs a test machine and then some extra memory. Done. He then thinks he needs an iPad, and goes to ask for one. He gets told to stop pushing his luck. Some time later, Reg’s wife has a baby, so Reg gets 2 weeks of paid paternity leave and a bunch of flowers sent to his house. He signs up to the childcare scheme so that he doesn’t have to pay National Insurance on the first £243 of his childcare. The accounts team makes it all happen seamlessly, as they did with his Give As You Earn payments, which come out of his wages and go straight to his favorite charity. Reg’s sales career is going well. He’s grateful for the help that he gets from the product support team. How do they answer all those 900-ish support calls so effortlessly each month? He’s impressed with the patches that are sent out to customers who find “interesting behavior” in their tools, and to the customers who just must have that new feature. A little later in his career at Red Gate, Reg decides that he’d like to learn about management. He goes on some management training specially customised for Red Gate, joins the Management Book Club, and gets together with other new managers to brainstorm how to get the most out of one to one meetings with his team. Reg decides to go for a game of Foosball to celebrate his good fortune with his team, and has to wait for Finance to finish. While he’s waiting, he reflects on the wonderful time he’s had at Red Gate. He can’t put his finger on what it is exactly, but he knows he’s on to a good thing. All of the stuff that happened to Reg didn’t just happen magically. We’ve got teams of people working relentlessly behind the scenes to make sure that everyone here is comfortable, safe, well fed and caffeinated to the max.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications built on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organizational goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business, then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove the requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    Figure: GSS Requirements Gathering

    GSS defines all the key business entities and attributes, including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    E-Business is about maximizing operational efficiency.
At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    Figure: Enterprise Business Processes

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.

    - The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the exact same data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled, so data manipulation can be controlled along organizational boundaries.
    - It uses variable-byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localization.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.
Figure: Interconnected Business Entities

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

< Previous Page | 111 112 113 114 115 116 117 118 119 120  | Next Page >